Square Cash Label Animation - iOS

Does anyone know how Square Cash animates their label?
The label does two things. First, it appears to resize to fit the numbers on screen, the way sizeToFit might, but I don't believe you can animate a sizeToFit call.
Second, digits that are removed animate downwards as they disappear, and newly entered digits animate down from above. That doesn't seem too tricky, but the comma does it too when the amount goes from 4 digits to 5!

I coded something similar, and yes, it is very tricky to make it perfect.
It may help you to know that I used collection views, which let you customise the cell and layout transitions.
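A minimal sketch of the idea, assuming each digit or comma is one cell in a horizontal collection view (the class, cell identifier, and data-source plumbing here are illustrative, not Square's actual code):

    import UIKit

    final class AmountViewController: UIViewController, UICollectionViewDataSource {
        var symbols: [String] = ["1", ",", "2", "3", "4"]
        @IBOutlet var collectionView: UICollectionView!

        func collectionView(_ collectionView: UICollectionView,
                            numberOfItemsInSection section: Int) -> Int {
            return symbols.count
        }

        func collectionView(_ collectionView: UICollectionView,
                            cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
            let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "digit", for: indexPath)
            // Configure a label inside the cell with symbols[indexPath.item].
            return cell
        }

        func append(digit: String) {
            symbols.append(digit)
            // Batch updates animate cells into their new positions, which also
            // covers the comma hopping over when 4 digits become 5.
            collectionView.performBatchUpdates({
                self.collectionView.insertItems(at: [IndexPath(item: self.symbols.count - 1, section: 0)])
            }, completion: nil)
        }
    }

Moving the comma is then a deleteItems/insertItems pair inside the same batch, and the layout animates everything together.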
Hope this helps.

It would be helpful if you posted a short video so we could see the animation you are talking about.
Based on your description, I'm guessing that they build the full number themselves by putting a single digit/symbol on a layer (or view) and then animating each character separately.
If you have a separate tile for each symbol, it is pretty easy to shrink or shift the previous tiles to make room for a new tile while animating the new number tile down at the same time. You could do the animation with UIView animation or with a set of coordinated CABasicAnimations.
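For example, a minimal sketch of the tile approach with plain UIView animation (the labels, duration, and offsets are illustrative):

    import UIKit

    // Drop a new digit tile in from above while the existing tiles
    // slide aside to make room for it.
    func animateIn(tile: UILabel, to finalFrame: CGRect,
                   shifting existing: [UILabel], by dx: CGFloat,
                   in container: UIView) {
        tile.frame = finalFrame.offsetBy(dx: 0, dy: -finalFrame.height)
        tile.alpha = 0
        container.addSubview(tile)
        UIView.animate(withDuration: 0.25) {
            tile.frame = finalFrame
            tile.alpha = 1
            for label in existing {
                label.frame = label.frame.offsetBy(dx: dx, dy: 0)
            }
        }
    }

Removing a digit is the mirror image: animate it downwards and fade it out, then remove it from its superview in the completion block. The fit-to-width resize can be a scale transform on the container, so individual tiles don't need to know about it.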

I know this answer is late, but it may help someone else: I developed a demo screen similar to Square Cash's, which you can check here.

Related

Strategy for scrolling a small area of content in SpriteKit

I'm creating an adventure game in Swift that lets the player view their inventory. I have a very small set of items you can acquire (only about 25 items for your inventory), and I'd like to display about 5-6 at a time in a rectangle. My thought is that the player can scroll through them by swiping horizontally, which takes them through the whole list while only ever showing 5-6 across at a time. The entire area is roughly 1/4 of the size of the screen.
I was looking at something like this https://github.com/crashoverride777/Swift-SpriteKit-UIScrollView-Helper but when I tried it, it seems suited to a giant area (the entire screen), and the items scroll off the screen as you scroll. I played with the content size, thinking of it as a "viewport", but didn't have any luck.
In my case, I want the items to scroll only within the confines of a roughly 300 x 150 rectangle, so an item never goes beyond the bounds of the box containing it.
I couldn't really figure out a reliable way of doing this and wanted to ask someone if they've done something similar and how they achieved it. What's a good strategy for this? Perhaps a camera + pan using SKCameraNode?
Thanks so much!
I think I can do it using a cropping mask - an initial test worked. Let me post something once I figure it out, but I wanted to let everyone know in case they were wondering.
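In case it's useful, here is roughly the shape of that crop-mask idea, assuming the item sprites all live under one content node (sizes, spacing, and names are illustrative):

    import SpriteKit

    func makeInventoryViewport(items: [SKSpriteNode]) -> SKCropNode {
        let viewport = SKCropNode()
        // Children render only where the mask is opaque, so nothing
        // escapes the 300 x 150 window.
        viewport.maskNode = SKSpriteNode(color: .white,
                                         size: CGSize(width: 300, height: 150))

        let content = SKNode()
        for (i, item) in items.enumerated() {
            item.position = CGPoint(x: CGFloat(i) * 56, y: 0) // illustrative spacing
            content.addChild(item)
        }
        viewport.addChild(content)
        return viewport
    }

A swipe or pan handler then just shifts the content node's x position, clamped so the first and last items can't scroll past the window's edges.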

Specific position of WKInterfaceImage alignment (watchOS 2)

I just bought the Apple Watch and I want to create a super simple game for it.
It is simple enough to use only a monochrome color scheme, but advanced enough to have an object moving in real time.
I am trying to figure out how to position an object on my Apple Watch with watchOS 2.
I want to place my object anywhere I'd like on the screen, but as far as I can tell there is absolutely no way to do that.
But in the following library, https://github.com/shu223/watchOS-2-Sampler, the developer can actually animate the alignment of an image, which suggests it should be possible to somehow specify a point at which to position an object. The animation is pretty smooth as well.
I have tried generating frames on the fly with CGContext (which uses Quartz 2D), but it's way too slow and the app just crashes on my Apple Watch.
I don't think the watch itself is too weak; some clever programming should solve my problem, but I just can't figure out how to do it.
As you already said, there is no way to directly position a WKInterfaceObject on your watch. You can change the alignment of an object, but that leaves exactly 3 positions: left, center, and right. That's probably not enough for your game.
What you could do: you can set a background image on a WKInterfaceGroup. So you could draw your game objects into a UIImage and set that as the background image of the group. That background image can also be animated, so you might be able to draw the movement of your game object into several UIImages and then set those as an animated background image.
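A minimal sketch of that approach on watchOS 2, assuming `group` is a WKInterfaceGroup outlet (the canvas size and the 8x8 "game object" are illustrative):

    import WatchKit
    import UIKit

    // Draw one frame of the game into a UIImage and show it as the
    // group's background image.
    func render(objectAt position: CGPoint, into group: WKInterfaceGroup) {
        let canvas = CGSize(width: 136, height: 170)
        UIGraphicsBeginImageContextWithOptions(canvas, true, 2)
        if let ctx = UIGraphicsGetCurrentContext() {
            ctx.setFillColor(UIColor.black.cgColor)
            ctx.fill(CGRect(origin: .zero, size: canvas))
            ctx.setFillColor(UIColor.white.cgColor) // monochrome game object
            ctx.fill(CGRect(x: position.x, y: position.y, width: 8, height: 8))
        }
        let frame = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        group.setBackgroundImage(frame)
    }

Since you found per-frame Quartz drawing slow, it may help to redraw only when the object actually moves and to keep the canvas as small as the game allows.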
One way I have found to get a little more control over where a WKInterfaceObject sits on the screen is to add it to a parent WKInterfaceGroup and center the item (vertically and/or horizontally, depending on how you want to move it). Then you can tweak the group's relative height/width (setRelativeHeight and setRelativeWidth) and its alignment (setVerticalAlignment and setHorizontalAlignment) to get your item positioned where you want.
This doesn't give you the ability to tweak frames or give the item a particular coordinate, but it does give you the ability to programmatically move a WKInterfaceObject anywhere on the screen.
For example, if you want your object at the exact vertical center of the screen, you leave the group's relative height at 1, with your item centered. To get the object 40% from the top, you change the group's relative height to 0.8 and its alignment to .Top. If you want it 60% from the top, the relative height is still 0.8, but the group's vertical alignment should be .Bottom, etc.
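That vertical math generalizes; here's a minimal sketch, assuming the item is centered inside the group (the function name is illustrative, and the alignment cases use the current Swift spelling):

    import WatchKit

    // Position the group's centered item at `fraction` of the screen height
    // (0.0 = top, 0.5 = center, 1.0 = bottom).
    func position(group: WKInterfaceGroup, atVerticalFraction fraction: CGFloat) {
        if fraction <= 0.5 {
            // The group hugs the top; its midpoint sits at `fraction`.
            group.setRelativeHeight(2 * fraction, withAdjustment: 0)
            group.setVerticalAlignment(.top)
        } else {
            // The group hugs the bottom; its midpoint sits at `fraction`.
            group.setRelativeHeight(2 * (1 - fraction), withAdjustment: 0)
            group.setVerticalAlignment(.bottom)
        }
    }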
I know this is kind of a confusing way to have to accomplish this, but this worked for me in getting items positioned exactly how I wanted.

Definitively applying a CGAffineTransform to a UIView

I'm having a problem with a scale transformation I have to apply to UIViews in Swift (it's the same in Objective-C too).
I'm applying a CGAffineTransformMakeScale() to multiple views during a gesture recognizer callback.
It's like a loop over a deck of cards: I remove the one on top, the X others behind it scale up, and a new one is added at the back.
The first iteration works as expected. But when I try to swipe the new front card, all the cards reset to their initial frame size, because applying a new transform seems to cancel the previous one and reset the view to its initial state.
How can I definitively apply/commit the first transform, so that I can then apply a new one based on the UIView's resulting new size?
I tried UIView.commitAnimations(), but nothing changed.
EDIT:
Here's a simple example to understand what I'm trying to do:
Imagine I have an initial UIView of 100x100.
I have a shrink factor of 0.95, which means the views behind will be 95x95, then 90.25x90.25, then 85.73x85.73, etc.
If I remove the top one (100x100), I want to scale up the others, so the 95x95 becomes 100x100, etc.
This is done by applying the inverse of the shrink factor, here 1.052631...
The first time I apply the inverse factor, all views are correctly resized.
My problem is what happens when a swipe on the new front UIView triggers another resize of all the views (so, for example, the 90.25x90.25 view that became 95x95 should now scale to 100x100).
At that moment, the same CGAffineTransformMakeScale() is applied to all views, and they all instantly reset to their original frame size (so the view that is now 95x95 resets to 90.25x90.25, and the transformation is then applied to that old size).
As suggested here and elsewhere, calling UIView.commitAnimations() at the end of each transformation doesn't change anything, and using CGAffineTransformConcat() keeps compounding the scale with itself, so of course the views become insanely big...
I hope I've made myself clearer; it's not easy to explain. Don't hesitate to ask if something is unclear.
After a lot of reading and consulting colleagues who know iOS programming better than I do, here are my conclusions:
Applying a CGAffineTransformMakeScale() only modifies a view visually, not its frame properties, and since it's difficult (and costly) to modify a view's bounds and/or frame afterwards, I should avoid alternating between applying a transform, updating bounds, applying another transform, and so on.
Applying the same CGAffineTransformMakeScale() again only resets the effect; the transform property is absolute, so it doesn't compound with the previous one.
Applying a CGAffineTransformScale() with the same values on top of the previous CGAffineTransformMakeScale() (or combining with CGAffineTransformConcat()) has unpredictable effects, and it would be very difficult to calculate precisely the new values to apply each time to get the effect I want.
The best way forward is to apply a single CGAffineTransformMakeScale() whose scale values I keep updating over the view's whole life.
This means reworking my implementation logic, but it's the easiest way to do this right.
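For anyone landing here, a minimal sketch of that single-transform approach, using the modern Swift spelling CGAffineTransform(scaleX:y:) for CGAffineTransformMakeScale() (the class and names are illustrative):

    import UIKit

    final class CardStack {
        private var scales: [UIView: CGFloat] = [:]
        private let shrinkFactor: CGFloat = 0.95

        // Each card starts at the scale matching its depth in the deck.
        func register(card: UIView, depth: Int) {
            let scale = CGFloat(pow(Double(shrinkFactor), Double(depth)))
            scales[card] = scale
            card.transform = CGAffineTransform(scaleX: scale, y: scale)
        }

        // When the front card is removed, bump every remaining card up one step.
        func advance(cards: [UIView]) {
            UIView.animate(withDuration: 0.3) {
                for card in cards {
                    let newScale = (self.scales[card] ?? 1) / self.shrinkFactor
                    self.scales[card] = newScale
                    // The transform is rebuilt from identity each time, so
                    // nothing compounds or resets unexpectedly.
                    card.transform = CGAffineTransform(scaleX: newScale, y: newScale)
                }
            }
        }
    }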
Thanks all for your tips.

handling finger detection on small objects

The application I am working on requires a bar 4px in height spanning the full width of the screen. I need to be able to select this 4px bar and move it around. I also cannot change the size of the bar; it has to be 4px in height.
This wouldn't be that big of an issue if I weren't using OpenGL to draw the object. OpenGL obviously does not have its own selection features, so I need to program my own.
After some initial research I built a color selector to identify the object: whatever x and y a finger touch returns from touchesBegan: is the pixel I grab from a screenshot of the OpenGL view. The issue with this is that finger location is not precise at all. With a mouse it works perfectly...
I then considered looping through a buffer zone around the touched x and y, but unfortunately antialiasing is applied to the screenshot of the OpenGL view when it's stored in memory, so the buffer returns several shades of my object's color. I could do a comparative color lookup to see whether a pixel falls within a range of colors, but that seems overly complicated given how much I've already had to do. Plus, cycling through the buffer zone isn't quick.
I have also thought about simply remembering the location of my line on the screen, and if my finger lands close to that location, assuming that's the one to select and move around.
In the future this application could have up to 4 lines like this, so I want something more robust than just knowing where the line is in memory.
Is there a better way to handle selection of small objects?
How about maintaining an array of frames for the four objects, but expand the heights to something more manageable (8px or bigger)? Then, a touch within the larger region could be compared against the array (using CGRectContainsPoint). If you get a hit, then "snap to" the center point of the smaller (4px) rectangle before beginning the drag.
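A minimal sketch of that test (the slop value and names are just for illustration):

    import UIKit

    // `bars` holds the true 4px frames; hit-testing uses vertically
    // inflated copies so an imprecise touch still finds the right bar.
    func barIndex(at point: CGPoint, in bars: [CGRect], slop: CGFloat = 18) -> Int? {
        for (index, bar) in bars.enumerated() {
            let hitBox = bar.insetBy(dx: 0, dy: -slop) // negative inset grows the rect
            if hitBox.contains(point) {
                return index
            }
        }
        return nil
    }

On a hit, you would then snap the drag to the stored 4px rectangle before moving it, as described above.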
I do something like this by maintaining a list of "drop targets" for drag & drop, where it snaps to the drop target when it gets pretty close. Don't know if I'm conveying the idea very well, but it ought to work.
If the four 4px rectangles are going to be contiguous or very close together, you'll have to be able to make the selected one stand out or the user won't be able to tell which they're dragging -- but you could do that by making it bigger (maybe 6-8 px) then bringing it to the front so it overlays its adjacent neighbors.
More of an idea than an answer I guess.
John,
I would suggest a different approach. As you've discovered, touches in iOS are very imprecise. Apple suggests that the hit area for your controls be at least 44x44 points. I've gone as small as 30x30 points, but that starts to get hard.
What I would suggest is factoring your code so the app knows where the line is and keeps track of it as a logical object. Then, in your touch handler, interpret touches based on a large "buffer area" around the things you want the user to be able to move. If you just have a single horizontal bar, this should work great. Where you'll get into trouble is if you have multiple thin horizontal bars close together; in that case you might need to rethink your app design and find another way to solve the problem.
As for the implementation details, you might add a pan gesture recognizer to your OpenGL view and have it notify the view of touch and drag actions. Then your OpenGL view can use its knowledge of where the draggable objects are to decide how to interpret the touches.
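A minimal sketch of that, assuming the bars are tracked as logical CGRects and the OpenGL view redraws from them (all names and the tolerance are illustrative):

    import UIKit

    final class BarDragController: NSObject {
        var bars: [CGRect]
        private var selectedIndex: Int?
        private let slop: CGFloat = 20 // generous vertical hit tolerance

        init(bars: [CGRect]) { self.bars = bars }

        @objc func handlePan(_ pan: UIPanGestureRecognizer) {
            let point = pan.location(in: pan.view)
            switch pan.state {
            case .began:
                // Hit-test against vertically inflated copies of the 4px bars.
                selectedIndex = bars.firstIndex { $0.insetBy(dx: 0, dy: -slop).contains(point) }
            case .changed:
                guard let i = selectedIndex else { return }
                bars[i].origin.y += pan.translation(in: pan.view).y
                pan.setTranslation(.zero, in: pan.view)
                // Ask the OpenGL view to redraw with the updated rects here.
            default:
                selectedIndex = nil
            }
        }
    }

You would attach it with glView.addGestureRecognizer(UIPanGestureRecognizer(target: controller, action: #selector(BarDragController.handlePan(_:)))).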

Image processing - Music note blocks detection

I have a music note sheet (like this, for example). I want to detect the position of each block in it, and then detect the vertical lines that split it, so it ends up like this.
Can you help me figure out how to do this?
Thanks in advance
You can use a Hough transform to detect lines in the image. The horizontal lines should come in groups of five, which are the staves; the vertical lines may need a bit more processing to figure out which are note stems and which are bar lines.
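If you don't want to pull in OpenCV, here is a minimal sketch of the voting step, assuming the sheet has already been binarized into a 2D Bool array (true = dark pixel); the step count and threshold are illustrative:

    import Foundation

    // Classic Hough accumulator over (theta, rho). Staff lines show up as
    // strong peaks near theta = pi/2 (horizontal); stems and bar lines
    // appear near theta = 0 (vertical).
    func houghLines(pixels: [[Bool]], thetaSteps: Int = 180,
                    threshold: Int = 200) -> [(theta: Double, rho: Double)] {
        let height = pixels.count
        let width = pixels.first?.count ?? 0
        let maxRho = Int(Double(width * width + height * height).squareRoot().rounded(.up))
        var acc = Array(repeating: Array(repeating: 0, count: 2 * maxRho + 1),
                        count: thetaSteps)

        for y in 0..<height {
            for x in 0..<width where pixels[y][x] {
                for t in 0..<thetaSteps {
                    let theta = Double(t) * .pi / Double(thetaSteps)
                    let rho = Double(x) * cos(theta) + Double(y) * sin(theta)
                    acc[t][Int(rho.rounded()) + maxRho] += 1 // one vote per (theta, rho)
                }
            }
        }

        var lines: [(theta: Double, rho: Double)] = []
        for t in 0..<thetaSteps {
            for r in 0..<acc[t].count where acc[t][r] >= threshold {
                lines.append((Double(t) * .pi / Double(thetaSteps), Double(r - maxRho)))
            }
        }
        return lines
    }

Groups of five horizontal peaks with near-equal spacing are your staves; vertical peaks whose extent spans the staff height are candidate bar lines, while shorter ones are likely stems.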
