I have a custom view with a piano keyboard drawn inside it. Each key is drawn with a separate call, so that I can redraw only the keys that need it. The view supports multitouch, so multiple keys can be held down at once.
Each key is somewhat expensive to draw, so I pass a specific region to setNeedsDisplayInRect: whenever a touch is detected on the view, in order to avoid redrawing the entire view (which produces noticeable lag).
In order to handle multiple touches, I iterate over the set of received touches, check whether each touch falls within one of the keys, and if so, update it and call setNeedsDisplayInRect: with that key's rectangle. In short, setNeedsDisplayInRect: is called multiple times in one function, each time with a different rect.
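A minimal sketch of that flow, with hypothetical helpers keyIndexAtPoint:, pressKey:, and rectForKey::

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        for (UITouch *touch in touches) {
            CGPoint point = [touch locationInView:self];
            NSInteger key = [self keyIndexAtPoint:point]; // hypothetical: key under the touch, or NSNotFound
            if (key != NSNotFound) {
                [self pressKey:key];                                // hypothetical: update the key's state
                [self setNeedsDisplayInRect:[self rectForKey:key]]; // dirty only this key's rect
            }
        }
    }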
The behavior I expected was that drawRect: would be invoked multiple times with different dirty regions. However, it seems that if I press the far-left and far-right keys at the same time, the entire view is redrawn rather than just those two keys; all the keys in between are redrawn unnecessarily.
What can I do to achieve what I want? I want to just draw the keys that are touched, not all the keys in between the two dirty keys.
The system will send you one drawRect: message per turn of the main run loop, regardless of how many times you called setNeedsDisplayInRect:. It passes you a rect that is at least the "union" of all of the dirty rects you passed to setNeedsDisplayInRect:. The system provides no way to find out exactly which rects were passed to setNeedsDisplayInRect:.
You could override setNeedsDisplayInRect: to keep an array of dirty rects (you will find +[NSValue valueWithCGRect:] useful), and clear out the array in drawRect:.
You could create your own setNeedsDisplayForKey: method that keeps an array of dirty keys and calls setNeedsDisplay.
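A minimal sketch of the first suggestion, assuming hypothetical rectForKey: and drawKey: helpers and a numberOfKeys count:

    @interface KeyboardView : UIView
    @property (nonatomic, strong) NSMutableArray *dirtyRects; // NSValue-wrapped CGRects
    @property (nonatomic, assign) NSInteger numberOfKeys;     // hypothetical
    - (CGRect)rectForKey:(NSInteger)key;                      // hypothetical, defined elsewhere
    - (void)drawKey:(NSInteger)key;                           // hypothetical, defined elsewhere
    @end

    @implementation KeyboardView

    // Record every dirty rect before UIKit coalesces them.
    - (void)setNeedsDisplayInRect:(CGRect)rect {
        if (!self.dirtyRects) {
            self.dirtyRects = [NSMutableArray array];
        }
        [self.dirtyRects addObject:[NSValue valueWithCGRect:rect]];
        [super setNeedsDisplayInRect:rect];
    }

    // Redraw only the keys that intersect one of the recorded rects,
    // then clear the array for the next cycle.
    - (void)drawRect:(CGRect)rect {
        for (NSInteger key = 0; key < self.numberOfKeys; key++) {
            CGRect keyRect = [self rectForKey:key];
            for (NSValue *dirty in self.dirtyRects) {
                if (CGRectIntersectsRect(keyRect, [dirty CGRectValue])) {
                    [self drawKey:key];
                    break;
                }
            }
        }
        [self.dirtyRects removeAllObjects];
    }

    @end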
I ran into this and posted my question here.
Curiously, I was also drawing a keyboard (with 88 keys). I never solved it and decided that I would deal with it if it turned out to be a performance problem (don't optimize until you need to, etc.). One thing I did do was, at startup, render the default keyboard into an image and use that as a base, so that I was only drawing the keys that were depressed, not the entire keyboard. It's faster to draw an image than all that CGPath stuff.
I was displaying MIDI notes as they played and performance was fine, so maybe you don't need to be concerned about this right now.
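For what it's worth, the caching trick is roughly this (keyboardImage, depressedKeys, drawFullKeyboard, and drawDepressedKey: are hypothetical names):

    // At startup, render the idle keyboard once into an image.
    - (void)cacheKeyboardImage {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, 0.0);
        [self drawFullKeyboard]; // hypothetical: all the CGPath work, done exactly once
        self.keyboardImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }

    // drawRect: then just blits the cached image and draws the held keys on top.
    - (void)drawRect:(CGRect)rect {
        [self.keyboardImage drawInRect:self.bounds];
        for (NSNumber *key in self.depressedKeys) {
            [self drawDepressedKey:key.integerValue];
        }
    }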
I'm coding in Swift 2.0 for devices running iOS7+.
Is it possible to present a tableview in a skewed/diagonal/slanted format as indicated below?
Obviously, if the answer is yes, what process would I need to go through to get that result?
Yes, it's possible. Views in iOS have a transform property, of type CGAffineTransform. You can use that to make the view appear skewed. I don't know offhand how to create a transform that produces the skewing effect; I suggest doing some Google searching.
The next issue you will face is interacting with taps. Changing the transform of a view does not transform the coordinate system applied to taps, so taps will still land on the non-skewed views. That will be much harder to sort out, and without doing a fair amount of research I don't have an answer for you on that one. (It would probably be possible to intercept touch events before they get to your table view and apply the inverse of your skewing transform to them so that you map the taps back to the rectangular coordinate system the table view is expecting.)
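As a hedged sketch (in Objective-C; the Swift version is a direct translation), a horizontal shear can be built from the raw CGAffineTransform components, and the inverse of the same transform maps a tap back into the unskewed space. The angle here is an arbitrary placeholder:

    // Shear x as a function of y: x' = x + tan(angle) * y.
    CGFloat angle = 15.0 * M_PI / 180.0;
    CGAffineTransform skew = CGAffineTransformMake(1, 0, tan(angle), 1, 0, 0);
    self.tableView.transform = skew;

    // Map a tap point (in the superview's coordinates) back to the
    // rectangular space the table view expects.
    CGPoint tapPoint = CGPointMake(120, 80); // e.g. from a gesture recognizer
    CGPoint unskewed = CGPointApplyAffineTransform(tapPoint, CGAffineTransformInvert(skew));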
I have over 150 UIImageViews. I need them to behave like a large ballet. I also need to be able to dynamically choreograph them. Some spin, some duck, some jump (each currently a different UIImageView subclass).
I'm assuming I need to put them all into an NSDictionary, then grab each object by key and say "you spin", "you jump", "you duck", and perform those moves in sync while the music is playing (no real music, but data coming in from an external source). I have no idea what that music (data) will be until it arrives, and no one has heard the song before. I've set up a while (music) {} loop with a large switch statement inside.
I'd like to place all the ballerinas on the view in the nib by hand so that they line up correctly with the subview. None of them change x or y. They might only change z, and maybe swap an image, fade opacity, etc. (which is what I was calling spin, duck, and jump). And there is no user interaction here; you're just in the audience.
I'm also assuming I'll need to use UIView's beginAnimations:context:, the setAnimation… configuration methods, and commitAnimations.
Am I on the right track? What's the best way to achieve this? Any optimizations I should consider? Apologies for all the analogies; it's the easiest way to explain what I'm trying to achieve.
I recommend you look at UICollectionView. There was a great WWDC session about it. I'm not exactly sure what you're trying to achieve, but with UICollectionView you can set up custom layouts and animate between different layouts automatically. So if your images have to move in sync, this might be a good option.
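As a minimal sketch, switching between two UICollectionViewLayout subclasses is a single animated call (restingLayout and spinningLayout are hypothetical layouts of your own):

    // Every visible cell animates from its position in the old layout
    // to its position in the new one.
    [self.collectionView setCollectionViewLayout:self.spinningLayout animated:YES];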
See UPDATE below:
I am confused about UIView layout as subviews are moving. I have a "Surface" UIView with several "Item" subviews on it. The user is allowed to click on the items and drag them around the surface.
What I have noticed, though, is that whenever one of the Surface's Item subviews moves (is dragged by the user), the Surface is marked as needing layout.
I do not want the Surface to lay out the Items after the user has moved them. That is pretty much the whole point: I am allowing the user to position them as they see fit. However, there are times (initial creation is a prime example) where I do need the Surface to place the Items.
So I would like one of two things:
A) Suppress setNeedsLayout calls on the Surface when one of its Item subviews changes (moves).
-- OR --
B) Know why I was asked to re-lay out and, if it was caused by Item motion, do nothing.
I cannot imagine I am the first to have this question.... :)
UPDATE:
After more investigation, I discovered more about what is going on. It is not that moving the Surface's Items causes a Surface relayout, as I originally thought; only the initiation of a drag caused it. Digging further, I discovered that it wasn't even the drag itself that was the cause, but a call to the Surface's bringSubviewToFront:.
I need to bring the Item to the front, so that when it is dragged it appears on top of the others.
I can understand why bringing a subview to the front might trigger a relayout, but again, it is not what I want to happen.
Maybe you should override layoutSubviews in your Surface UIView.
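A minimal sketch of that, assuming the Surface keeps a flag for the times when placement is genuinely wanted (needsInitialPlacement and placeItems are hypothetical):

    - (void)layoutSubviews {
        [super layoutSubviews];
        if (!self.needsInitialPlacement) {
            return; // ignore relayouts triggered by bringSubviewToFront: and drags
        }
        self.needsInitialPlacement = NO;
        [self placeItems]; // hypothetical: positions the Item subviews initially
    }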
Imagine a view with, say, 4 subviews, next to each other but non-overlapping.
Let's call them view#1 ... view#4
All five views (the parent and its four subviews) are my own UIView subclasses (yes, I've read Event Handling as well as the iOS Event Guide, and this SO question and this one, not answered yet).
When the user touches one of them, UIKit "hit-tests" it and delivers subsequent events to that view: view#1.
Even when the finger moves outside view#1, over, say, view#3, view#1 still receives touchesMoved while view#3 receives nothing.
I want view#3 to start replying to the touches. Maybe with a "touchedEntered" of my own, together with possibly a "touchesExited" on view#1.
How would I go about this?
I can see two approaches:
1. Side-step the problem and do all the touch handling in the parent view whenever I detect a touchesMoved outside of view#1's bounds, or
2. Transfer control to the parent view, telling it to "redispatch". It's not very clear how such redispatching would work, though.
For solution #2, what I'm getting confused about is not the forwarding per se, but how to find the UIView I want to forward to. I can obviously loop through the parent's subviews until I find one whose bounds/frame contain the touch, but I am wondering if I am missing something that Apple has already provided but that I cannot relate to this problem.
Any idea?
I have done this, but I used CALayers instead of sub-UIViews. That way, there are no worries about the subviews catching/redispatching events to the parent UIView. You might not be able to do that, but it does simplify things. My solution tended to use CGRectContainsPoint() a lot.
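A rough sketch of that layer-based tracking, assuming each key is a direct sublayer of the view's layer:

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        CGPoint point = [[touches anyObject] locationInView:self];
        for (CALayer *layer in self.layer.sublayers) {
            if (CGRectContainsPoint(layer.frame, point)) {
                // The finger is currently over this layer; react here.
            }
        }
    }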
You may want to read Event Handling again, as it comes pretty close to answering your question:
"A touch object...is associated with its hit-test view for its lifetime, even if the touch represented by the object subsequently moves outside the view."
Given that, if you want to accomplish your goal of having different views react to the user's finger crossing over them, and if you want to do it within the touch-handling mechanism provided by UIView, you should go with your first approach: have the parent view handle the touch. The parent can use -hitTest:withEvent: or -pointInside:withEvent: as it's tracking a touch to determine if the touch is in one of the subviews, and if so can send an appropriate message.
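A sketch of that parent-view approach, using the asker's hypothetical touchedEntered/touchesExited methods, a hypothetical ChildView subview class, and a currentView property to remember which subview the finger was last over (the subviews would have userInteractionEnabled set to NO so the parent receives the touches):

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        CGPoint point = [[touches anyObject] locationInView:self];
        ChildView *hit = nil;
        for (ChildView *subview in self.subviews) {
            CGPoint local = [self convertPoint:point toView:subview];
            if ([subview pointInside:local withEvent:event]) {
                hit = subview;
                break;
            }
        }
        if (hit != self.currentView) {
            [self.currentView touchesExited]; // hypothetical; messaging nil is harmless
            [hit touchedEntered];             // hypothetical
            self.currentView = hit;
        }
    }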
Suppose I have a VerticalFieldManager (VFM) full of both focusable and non-focusable fields. Since most of them are spread out far apart, the movement from one focused field to another is jerky at best, even with NullFields in between. In other words, it just sets the current y position to the next focused field without smoothly scrolling the screen.
What I want to achieve is the ability to scroll at a fixed rate between fields, so it doesn't just jump from one focused field to the next instantaneously. From reading up on this, it appears to be a matter of overriding moveFocus and driving the scroll from a TimerTask via an accessor method, as per this link. However, I haven't seen a practical implementation of this, complete with the routines that are called on the TimerTask's thread.
Is there any way to achieve this type of behavior?