Combining multiple MTKView/CAMetalLayer displays - iOS

I am using multiple MTKViews to display different content on the screen, alongside normal UIViews for the UI. I want to synchronize the presentation of these MTKViews to the same clock. Is there a way to do that? In principle, I could combine the layouts of these views into a single MTKView, but that would kill the modularity of the code, and I'm not sure all that rework would buy any performance.

A simple approach that should work in most cases would be to compute the time at which you'd like the frame to draw and use the present(_:atTime:) method of MTLCommandBuffer instead of the present(_:) method.
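As a sketch of that simple approach (assuming a 60 Hz display for the frame duration; `commandBuffers` and `drawables` are illustrative names for paired per-view arrays, not anything from the question):

```swift
import Metal
import QuartzCore

// Sketch: present several views' drawables at the same host time by giving
// every command buffer the same presentation deadline.
func presentSynchronized(commandBuffers: [MTLCommandBuffer],
                         drawables: [CAMetalDrawable]) {
    let frameDuration = 1.0 / 60.0                       // assumes a 60 Hz display
    let targetTime = CACurrentMediaTime() + frameDuration

    for (commandBuffer, drawable) in zip(commandBuffers, drawables) {
        // Rendering into each drawable's texture is assumed to be encoded already.
        commandBuffer.present(drawable, atTime: targetTime)
        commandBuffer.commit()
    }
}
```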
To exert greater control, it helps to understand what a command buffer's present... methods do and don't do. They do not encode any command into the buffer. As documented, they basically just add a scheduled handler to themselves that calls present on the drawable.
If you're careful about it, you can arrange to present the drawable in a way that doesn't much involve the command buffer.
But, does the command buffer using a scheduled handler make sense? Shouldn't it use a completed handler? After all, you want to display the completed rendering, right?
Well, drawables are smart about presenting themselves. The present method doesn't present immediately. A drawable tracks which scheduled commands might render or write to its texture. When present is called, that arranges for the drawable to draw itself on the screen as soon as possible after all such commands have completed. (Note that this does not imply that the command buffer itself has completed. There may be additional commands that don't involve the drawable's texture that aren't yet completed.)
This provides both challenges and opportunities for syncing the presentation of multiple drawables. The challenge is that, while you can control when you call present on each drawable, that doesn't necessarily sync their actual display: each will display as soon as it can after present is called and all commands involving its texture have completed, and that last part can occur at different times for different drawables.
One possible approach is to add a presented handler to the master drawable; the handler would call present on the other drawables. After all of the command buffers are scheduled, call present on the master drawable. You can use a dispatch group to determine when all of the command buffers are scheduled: enter the group once for each command buffer, add a scheduled handler to each that leaves the group, then set a notify block on the group that does the master present. This technique probably won't achieve perfect synchronization, because there's latency between when the master drawable has actually presented and when the presented handler is called, and then further latency in presenting the other drawables.
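That dispatch-group technique can be sketched as follows (`masterDrawable`, `otherDrawables`, and `commandBuffers` are illustrative names):

```swift
import Foundation
import Metal
import QuartzCore

// Sketch: present a master drawable once every command buffer is scheduled,
// then present the remaining drawables from the master's presented handler.
func presentViaMaster(masterDrawable: CAMetalDrawable,
                      otherDrawables: [CAMetalDrawable],
                      commandBuffers: [MTLCommandBuffer]) {
    let group = DispatchGroup()

    // When the master has actually presented, present the others.
    masterDrawable.addPresentedHandler { _ in
        for drawable in otherDrawables {
            drawable.present()
        }
    }

    for commandBuffer in commandBuffers {
        group.enter()
        commandBuffer.addScheduledHandler { _ in
            group.leave()
        }
        commandBuffer.commit()
    }

    // Once every command buffer has been scheduled, present the master.
    group.notify(queue: .main) {
        masterDrawable.present()
    }
}
```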
Another possible approach is to set the presentsWithTransaction property of all of your CAMetalLayers to true. Then, when it's time to present, call waitUntilScheduled on each command buffer followed by present on each drawable. (Do not use a present... method of the command buffer.) This will guarantee that all of the drawables will present during the same Core Animation transaction – that is, synchronized.
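A rough sketch of that second approach, with presentsWithTransaction already set to true on every CAMetalLayer (function and parameter names are illustrative):

```swift
import Metal
import QuartzCore

// Sketch: wait for each command buffer to reach the scheduled state, then
// present all drawables directly so they land in one Core Animation transaction.
func presentInOneTransaction(commandBuffers: [MTLCommandBuffer],
                             drawables: [CAMetalDrawable]) {
    for commandBuffer in commandBuffers {
        commandBuffer.commit()
    }
    for commandBuffer in commandBuffers {
        commandBuffer.waitUntilScheduled()
    }
    for drawable in drawables {
        drawable.present()   // note: not commandBuffer.present(drawable)
    }
}
```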

You can use presentsWithTransaction.
Set presentsWithTransaction = true on each MTKView, then change how the command buffer is committed:

public func draw(in view: MTKView) {
    // ... encode your render passes here ...
    commandBuffer?.commit()
    commandBuffer?.waitUntilScheduled()
    view.currentDrawable?.present()
}

Now all the Metal views will be presented synchronously.
It works for me.

Related

In Swift, should I run VC initialization code in prepareForSegue or in viewDidLoad if it is a viable option when maximizing frame rate is the goal?

Is it better to run VC initialization code in prepareForSegue or in viewDidLoad when maximizing frame rate is the goal?
There are many times I can choose to set up a VC by passing in an enum that tells it what it is, so it sets itself up accordingly during viewDidLoad. I could instead set those values directly in prepareForSegue, minimizing the work done in viewDidLoad. Assuming this code must run on the main thread, which is preferable for the smoothest UI transition?
If you want to strictly follow the principles of object-oriented programming (and I advise you to), each object should take care of its own internal initialization and setup. As to which option creates a smoother user interface transition, it really does not matter, since both options must run on the main thread (UIKit must run on the main thread).
I realized after posting this question that it was asked in ignorance: both prepareForSegue and viewDidLoad happen prior to viewDidAppear, and therefore will delay the UI by the same amount if both contain code that runs on the main queue, regardless of where that code lives. So there should be virtually no difference under nearly all conditions I can currently think of.
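For illustration, here is one common shape of the choice being discussed: hand over only lightweight configuration in prepare(for:sender:) and let the destination configure itself in viewDidLoad. `Mode`, `DetailViewController`, and `SourceViewController` are hypothetical names, not from the question:

```swift
import UIKit

// Hypothetical configuration enum passed between view controllers.
enum Mode { case list, grid }

class DetailViewController: UIViewController {
    var mode: Mode = .list

    override func viewDidLoad() {
        super.viewDidLoad()
        // Configure subviews according to `mode`. This runs on the main
        // thread before viewDidAppear either way, so the placement is an
        // organizational choice, not a performance one.
    }
}

class SourceViewController: UIViewController {
    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        if let detail = segue.destination as? DetailViewController {
            detail.mode = .grid   // hand over configuration only
        }
    }
}
```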

Any method that executes before willAnimateRotationToInterfaceOrientation?

Is there any method that will execute prior to - (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration?
This is very important to me because it seems like the view coordinate system switches immediately before this method is executed. I am trying to execute a method immediately before the system has decided to rotate the device, so something along the lines of - (BOOL)shouldAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation would seem to be the perfect place to execute such a method (but it doesn't seem to exist in the documentation).
Thank you!
The method you are looking for is willRotateToInterfaceOrientation:duration:. The bounds have not been transformed at the time it is called.
But:
These methods are deprecated in iOS 8 (and the entire "rotation" model is completely changed), so don't become reliant upon them.
In my view, it would be better to ask yourself why you think you need the view coordinate system in this way. Prefer positioning things using constraints that are independent of such considerations.
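If you do need a pre-rotation hook on iOS 8 and later, the replacement is viewWillTransition(to:with:), which is called before the size change is applied. A minimal sketch (`MyViewController` is an illustrative name):

```swift
import UIKit

class MyViewController: UIViewController {
    // Called before the size change is applied, so view.bounds still holds
    // the old size at this point.
    override func viewWillTransition(to size: CGSize,
                                     with coordinator: UIViewControllerTransitionCoordinator) {
        super.viewWillTransition(to: size, with: coordinator)
        coordinator.animate(alongsideTransition: { _ in
            // Runs inside the rotation animation, with the new size applied.
        }, completion: nil)
    }
}
```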

How to get notifications for changes to a file in ios?

I am downloading a file using AFNetworking, but I am not able to keep track of my download operation (i.e. the view controller receiving the completion callback has been dismissed). Is there any API to track changes to the file size so I can compare against the total file size?
I don't want to monitor filesystem changes like these answers:
what-is-the-optimal-way-to-monitor-changes-in-a-directory-with-a-kqueue
notification-of-changes-to-the-iphones-documents-directory
Can I use KVO to implement this type of behavior?
Your problem is a design issue: the download operation is relevant to multiple view controllers, but it is started and owned by a view controller that can go out of scope before the download completes.
Move the download operation to a higher-level controller object (like your app delegate). That way, any controller can initiate downloads or view the status of downloads. You may want to define a protocol that interested view controllers implement, allowing you to call specific methods on those view controllers.
Dispatch messages about your download operation to interested view controllers from that higher-level controller.
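A sketch of that design, with a long-lived controller owning the download and notifying observers through a protocol. All names here are illustrative, not an existing API:

```swift
import Foundation

// Hypothetical protocol that interested view controllers implement.
protocol DownloadObserver: AnyObject {
    func downloadController(_ controller: DownloadController,
                            didUpdateProgress progress: Double)
    func downloadControllerDidFinish(_ controller: DownloadController)
}

// A higher-level, long-lived owner of the download operation. It outlives
// any single view controller, so callbacks always have somewhere to land.
final class DownloadController {
    static let shared = DownloadController()
    private var observers: [DownloadObserver] = []  // use weak wrappers in real code

    func addObserver(_ observer: DownloadObserver) {
        observers.append(observer)
    }

    // Called from the networking layer's progress callback.
    func reportProgress(_ progress: Double) {
        observers.forEach { $0.downloadController(self, didUpdateProgress: progress) }
    }
}
```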

iOS - Can UIGraphicsGetCurrentContext be used outside drawRect?

I want to dynamically change the current CGContextRef according to different user actions. Is this possible, or is modifying it only possible within drawRect: of a view instance? What happens when I call UIGraphicsGetCurrentContext() outside drawRect:? Are there any limitations in doing so, and is it recommended? Are there any implications I need to consider?
According to the docs, the graphics context is only set up just before drawRect: is called. This means that if drawRect: is not called, the context won't be set, and if you don't let the system call it again, it won't be there either (never call drawRect: yourself, for that reason).
Use one of these methods to force the system to call drawRect: again:
setNeedsDisplay
setNeedsDisplayInRect:
That doesn't mean you can only do your drawing inside drawRect:, however. The context is effectively globally available at that moment, and you can call clean, separate functions or even classes to do the drawing. Passing the context reference to those functions is a clean way to do it.
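A sketch of that pattern: fetch the context once inside draw(_:) and pass it to helpers, rather than calling UIGraphicsGetCurrentContext() elsewhere. `BadgeView` and `drawBadge` are illustrative names:

```swift
import UIKit

class BadgeView: UIView {
    override func draw(_ rect: CGRect) {
        // The context is only valid while draw(_:) is running.
        guard let context = UIGraphicsGetCurrentContext() else { return }
        drawBadge(in: context, rect: rect)
    }

    // A clean, separate helper that receives the context explicitly.
    private func drawBadge(in context: CGContext, rect: CGRect) {
        context.setFillColor(UIColor.red.cgColor)
        context.fillEllipse(in: rect.insetBy(dx: 4, dy: 4))
    }
}

// To trigger a redraw from elsewhere (e.g. after a user action):
// badgeView.setNeedsDisplay()
```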

Dealing with hierarchy-breaking effects in iOS games and apps

I started working as a iOS developer about a year and a half ago, and I'm having some trouble with software architecture and organization. I use Apple's recommended Model-View-Controller paradigm, and my code is generally very hierarchical: if a screen has (for example) a HUD, a control panel, and a display area, I have a main controller for the screen and sub-controllers for the HUD, control panel, and display area. The sub-controllers generally have no knowledge of their neighboring controllers and use methods in the main controller to interact with them.
However, especially in games, I often run into hierarchy-breaking problems that just can't be elegantly solved with this model. For instance, let's say I have a coin in the control panel area that I want to animate flying to the HUD. I can either animate the original coin to the new position, which would require a method like animateCoinToPosition: in the control panel sub-controller and a method like getPositionForFinalCoinPositionInHUD in the main controller; or, I can hide the original coin and create a duplicate coin either in the main controller or the HUD controller, which would require a delegate method like animateCoinToHUDFromStartingPosition:. I don't like having such oddly-specific methods in my controllers, since they only really exist to solve one problem, and additionally expose the hierarchy. My ideal solution would be to have a single method called animateCoinToHUD, but this would require flattening the entire hierarchy and merging the three controllers into one, which is obviously not worth it. (Or giving the sub-controllers access to their siblings — but that would essentially have the same effect. The sub-controllers would then have dependencies with each other, creating a single messy spiderweb controller instead of a main controller and three mostly independent sub-controllers.)
And it often gets worse. What if I want to display a full-screen animation or particle effect when moving the coin? What if my coin is a lot more complicated than a simple sprite, with many subviews and details, to the point where creating a duplicate coin using animateCoinToHUDFromStartingPosition: is inefficient? What if the coin flies to the HUD but then returns to the control panel? Do I "lend" the coin view to the main controller and then take it back when the animation completes, preserving the original position/z-order/etc. in temporary variables so that they can be restored? And another thing: logically, code that concerns multiple sub-controllers belongs in the main controller, but if these interactions are common, the main controller grows to be many thousands of lines long — which I've seen happen in many projects, not just my own.
Is there a consistent way to handle these hierarchy-breaking effects and actions that don't require duplicate code or assets, don't bloat my controllers, and elegantly allow me to share objects between sub-controllers? Or am I using the wrong approach entirely?
So, I think you may be taking the "never go up the hierarchy" rule a little too literally.
I think the idea is that you don't know specifically what the parent is, but you can define a protocol and know that whatever your parent object is, it responds to that protocol. Ideally, you test in code to confirm that it responds to the protocol. Then use the protocol to send the message in a generic way: pass the coin object to the parent object and let the parent animate it off the screen and into the HUD.
The sub-controllers then have a id<parent_protocol> parent; instance variable and their initialization method takes one of those as a parameter. Given your description you already have something like this in place, or at least enough to implement "sub-controllers generally have no knowledge of their neighboring controllers and use methods in the main controller to interact with them" as you say.
So the idea, from a design perspective, is that the coin pickup happens in the display panel, and all it knows is that its parent object has a pickupCoin: method that will do whatever is appropriate with a picked-up coin. The display panel doesn't know the coin goes to the HUD, or wherever; just that picked-up coins get handled by the parent controller's pickupCoin: method.
The OOP design philosophy here is that all knowledge of the parent is encapsulated in the protocol definition. This makes the child & parent more loosely coupled so that you could swap in any parent that implemented that protocol and the children would still work fine.
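A Swift sketch of that coupling (the answer's own snippet is Objective-C; all names here are illustrative):

```swift
import UIKit

// The child's entire knowledge of its parent is this protocol.
protocol CoinHandling: AnyObject {
    func pickUpCoin(_ coin: UIView, fromFrameInWindow frame: CGRect)
}

final class ControlPanelController {
    weak var parentHandler: CoinHandling?

    func coinTapped(_ coin: UIView) {
        // The panel doesn't know the coin ends up in the HUD, only that its
        // parent, whoever that is, knows how to handle a pickup.
        let frameInWindow = coin.convert(coin.bounds, to: nil)
        parentHandler?.pickUpCoin(coin, fromFrameInWindow: frameInWindow)
    }
}
```

Any parent that adopts CoinHandling can be swapped in without changing the child, which is the loose coupling the answer describes.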
There are looser couplings you could use (globally posted notifications, say), but in the cases you describe, I think something like what I've outlined is probably more appropriate and likely more performant.
does that help?
