iOS drag on UIView, cloth effect - ios

Since I saw this menu drag concept, I have really been interested to find out how to accomplish it.
So I am wondering how I would go about dragging with a cloth-effect in a UIView?
I know how to drag items, but how do you give them the ripple effect?
(Better image: http://dribbble.com/shots/899177-Slide-Concept/attachments/98219)

In short: it’s really, really hard. The old Classics app achieved something along those lines using a series of pre-rendered smooth paper images under a simple transform of their view content, but as you can see from those screenshots (and the one below—note that the text at the bottom is still making a straight line, since it’s getting a basic perspective transform), the effect was fairly limited.
The effect shown in that Dribbble design is much more complicated, since it’s actually doing a scrunching-up warp of the view’s content, not just skewing it as Classics did; the only way I can think of to do that exact effect on iOS at present would be to drop into OpenGL and distort the content with a mesh there.
A simpler option would be to use UIPageViewController, which will at least get you the nice iBooks-style curling-paper effect. It ain't fabric, but it's a lot easier than the GL option.

There is already an open source reimplementation of this.
The blog post Mesh Transforms covers the private CAMeshTransform class. Rather than treating a CALayer as a single quad, it allows CALayers to be turned into a mesh of connected faces. This class is how Apple implemented the page-curl and iBooks page-turning effects.
However, the API doesn't tolerate malformed input well at all, and Apple has kept it a private API.
If you keep reading that blog post, though, you'll come to this section just after the part about it being a private API.
In the spirit of CAMeshTransform I created a BCMeshTransform which copies almost every feature of the original class.
...
Without direct, public access to Core Animation render server I was forced to use OpenGL for my implementation. This is not a perfect solution as it introduces some drawbacks the original class didn’t have, but it seems to be the only currently available option.
In effect he renders the content view into an OpenGL texture and then displays that. This lets him mess around with it however he likes.
Including like this...
I encourage you to check out the demo app I made for BCMeshTransformView. It contains a few ideas of how a mesh transform can be used to enrich interaction, like my very simple, but functional take on that famous Dribbble.
What famous Dribbble? This one:
Here is what the example looks like:
Open source project: https://github.com/Ciechan/BCMeshTransformView
Example Implementation of the curtain effect: BCCurtainDemoViewController.m
How does it work?
It sets up the BCMeshTransformView with some lighting and perspective:
// From: https://github.com/Ciechan/BCMeshTransformView/blob/master/Demo/BCMeshTransformViewDemo/BCCurtainDemoViewController.m#L59
self.transformView.diffuseLightFactor = 0.5;
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0/2000.0;
self.transformView.supplementaryTransform = perspective;
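The only non-identity entry there is m34, and it's what turns the otherwise affine transform into a perspective projection: with m34 = -1/2000, a transformed point at depth z ends up divided by w = 1 - z/2000, so vertices lifted toward the viewer are magnified. A rough sketch of just that arithmetic (JavaScript; the function name is illustrative, not a Core Animation API):

```javascript
// Sketch of how the m34 entry produces perspective foreshortening.
// d is the assumed "eye distance" (2000 in the snippet above, so m34 = -1/d).
function projectWithM34(x, y, z, d) {
  // Core Animation's row-vector convention gives w = 1 + z * m34 = 1 - z / d.
  const w = 1 - z / d;
  return { x: x / w, y: y / w };
}

// A point on the layer plane (z = 0) is unchanged...
const flat = projectWithM34(100, 50, 0, 2000);
// ...while a point lifted toward the viewer (z = 500) is magnified.
const near = projectWithM34(100, 50, 500, 2000);
```

This is why the curtain's raised folds look bigger than the flat parts of the view.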
Then, using a UIPanGestureRecognizer, it tracks the touches and uses this method to build a new mesh transform every time the user's finger moves.
// From: https://github.com/Ciechan/BCMeshTransformView/blob/master/Demo/BCMeshTransformViewDemo/BCCurtainDemoViewController.m#L91
self.transformView.meshTransform = [BCMutableMeshTransform curtainMeshTransformAtPoint:CGPointMake(point.x + self.surplus, point.y) boundsSize:self.transformView.bounds.size];
// From: https://github.com/Ciechan/BCMeshTransformView/blob/master/Demo/BCMeshTransformViewDemo/BCMeshTransform%2BDemoTransforms.m#L14
+ (instancetype)curtainMeshTransformAtPoint:(CGPoint)point boundsSize:(CGSize)boundsSize
{
    const float Frills = 3;

    point.x = MIN(point.x, boundsSize.width);

    BCMutableMeshTransform *transform = [BCMutableMeshTransform identityMeshTransformWithNumberOfRows:20 numberOfColumns:50];

    CGPoint np = CGPointMake(point.x / boundsSize.width, point.y / boundsSize.height);

    [transform mapVerticesUsingBlock:^BCMeshVertex(BCMeshVertex vertex, NSUInteger vertexIndex) {
        float dy = vertex.to.y - np.y;
        float bend = 0.25f * (1.0f - expf(-dy * dy * 10.0f));

        float x = vertex.to.x;
        vertex.to.z = 0.1 + 0.1f * sin(-1.4f * cos(x * x * Frills * 2.0 * M_PI)) * (1.0 - np.x);
        vertex.to.x = vertex.to.x * np.x + vertex.to.x * bend * (1.0 - np.x);

        return vertex;
    }];

    return transform;
}
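The key scalar in that method is the bend factor, a Gaussian-shaped falloff around the finger's vertical position: the mesh row under the finger (dy = 0) gets no bend and follows the drag fully, while rows further away approach the 0.25 cap and lag behind, which is what creates the curtain sag. A small sketch of just that term (JavaScript, since the shape of the function is language-independent):

```javascript
// bend(dy) = 0.25 * (1 - exp(-dy*dy * 10)), as in curtainMeshTransformAtPoint:.
// dy is the vertical distance (in normalized 0..1 coordinates) from the touch row.
function bend(dy) {
  return 0.25 * (1 - Math.exp(-dy * dy * 10));
}

const atFinger = bend(0); // no bend on the row under the finger
const farAway  = bend(1); // approaches the 0.25 cap far from it
```

Plugging those extremes into the vertex mapping above shows why: at the finger's row, x is scaled purely by np.x (fully dragged), while distant rows keep an extra bend * (1 - np.x) of their original width.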

Related

How to apply different easing effects to sprite action?

I use a lot of the CCEase* functionality in Cocos2D described here. iOS 7's Sprite Kit also has SKActionTimingMode, but it offers only simple modes. How can I get CCEaseElasticIn- or CCEaseBounceIn-like effects using Sprite Kit?
Sprite Kit intentionally left easing (or tweening) limited, with the expectation that the developer would take control of the specifics of the sprites' motion. Basically, what you need to do is make a custom action and apply an easing curve to the parameter before changing the property (rotation, position, scale, etc.) of the sprite. Here's an example.
CGFloat initialScale = mySprite.xScale;
SKAction *scaleAction = [SKAction customActionWithDuration:duration actionBlock:^(SKNode *node, CGFloat elapsedTime) {
    CGFloat t = elapsedTime / duration;
    CGFloat p = t * t;
    CGFloat s = initialScale * (1 - p) + scale * p;
    [node setScale:s];
}];
[mySprite runAction:scaleAction];
The part of this that determines the easing is p = t*t. So, p is a function of t such that:
when t is 0, p is 0
when t is 1, p is 1
That means that you will start at the beginning and end at the end, but the shape of the curve in between determines how you get there. Easing functions can be simple, like the one shown here (basically an ease-in), or quite complex, such as elastic or bounce. To generate your own, try this: http://www.timotheegroleau.com/Flash/experiments/easing_function_generator.htm
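To make the "complex" cases concrete, here are two Penner-style curves next to the quadratic ease-in from the answer. All three satisfy the p(0) = 0, p(1) = 1 contract; only the in-between shape differs. (Sketched in JavaScript for brevity; the same bodies translate directly into the customActionWithDuration: block above.)

```javascript
// Quadratic ease-in, as used in the answer above.
function easeInQuad(t) { return t * t; }

// Penner-style bounce-out: a parabola repeated with shrinking amplitude.
function bounceOut(t) {
  const n1 = 7.5625, d1 = 2.75;
  if (t < 1 / d1)        return n1 * t * t;
  else if (t < 2 / d1)   return n1 * (t -= 1.5 / d1) * t + 0.75;
  else if (t < 2.5 / d1) return n1 * (t -= 2.25 / d1) * t + 0.9375;
  else                   return n1 * (t -= 2.625 / d1) * t + 0.984375;
}

// Elastic ease-out: an exponentially decaying sine that overshoots the target.
function elasticOut(t) {
  if (t === 0 || t === 1) return t;
  return Math.pow(2, -10 * t) * Math.sin((t * 10 - 0.75) * (2 * Math.PI) / 3) + 1;
}
```

Swapping p = easeInQuad(t) for p = bounceOut(t) in the custom action is all it takes to get a CCEaseBounceIn-style landing.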
Or take a more detailed look at Robert Penner's equations: http://www.robertpenner.com/easing/
For arbitrary easing, Kardasis' answer says it all.
If you're looking for an easy way to add a bouncing effect to your animations that is consistent with the way things are done in UIKit, I have something for you.
Apple introduced spring animations in UIKit a couple of years ago, letting you set a spring damping and initial velocity when performing a UIView animation. Unfortunately, they didn't implement that in SpriteKit, so I made my own library that does just that.
It's a set of extensions on SKAction that replicate most factory methods, adding the damping and velocity parameters.
The code is on GitHub, feel free to use it: https://github.com/ataugeron/SpriteKit-Spring

Rotate a UIView in 2D, not 3D

When I use either of the following pieces of code, the button rotates in 3D rather than 2D (flat against the screen). How can I avoid the 3D behavior? Here's what the button looks like during the rotation:
CGAffineTransform rotationTransform = CGAffineTransformIdentity;
rotationTransform = CGAffineTransformRotate(rotationTransform, (offset/(180.0 * M_PI)));
button.transform = rotationTransform;
or:
button.transform = CGAffineTransformRotate(CGAffineTransformIdentity, (offset/(180.0 * M_PI)));
or:
button.transform = CGAffineTransformMakeRotation(offset/(180.0 * M_PI));
Here's the answer from an Apple employee:
When you set a transform, the frame becomes less well-defined, to the
point that you should no longer use it to size or position a view. Try
using the center property instead. You're probably not getting a 3D
rotation, but a distortion due to the compounding effect of setting
the frame and transform.
I met the same problem, which is really annoying, and I got it settled at last, though I still have no idea why.
I got unexpected 3D rotations when I tried to perform a 2D rotation in "viewDidLayoutSubviews" with
double radians = (degree / 180.0) * M_PI;
uiImageView.transform = CGAffineTransformMakeRotation(radians);
My solution was to perform the transform somewhere other than "viewDidLayoutSubviews", and it works perfectly.
Interestingly, when the annoying 3D rotation happens and I keep rotating my iPhone for a while, the 3D rotation suddenly disappears and the 2D rotation comes back and takes its place, but the image stays distorted. I guess iOS is doing some internal transforming in "viewDidLayoutSubviews", and things get messed up if we impose an additional transform on it. However, my guess cannot explain why it disappears after a while. I also tried to NSLog the members of "uiImageView.transform" (a, b, c, d, tx, ty) in "viewDidLayoutSubviews" but got no clue.
SOLUTION: Check whether you are performing the rotation in "viewDidLayoutSubviews" or some similar method; if so, try moving it elsewhere and cross your fingers.

UI equivalent of setFrameCenterRotation?

I have been coding for macOS for months and am now exploring iOS. It's a bit confusing.
On macOS, I have an NSImageView I can rotate using setFrameCenterRotation. One outlet, one call, and the work is done.
I suppose there is an equivalent in iOS, but I could only find code examples using animation and a lot of calculation (certainly to make "nice" game-like features on this tiny screen).
I don't want any animation, just my UIImageView to rotate in one block by a given number of degrees.
Can it be done?
Any suggestion really welcome!
Regards,
B.
You probably want to set the view's transform property to a rotation transform. Example:
CGFloat radians = degrees * M_PI / 180;
myImageView.transform = CGAffineTransformMakeRotation(radians);
UIImageView inherits the transform property from UIView, so you will need to consult the UIView Class Reference for documentation.
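Under the hood, CGAffineTransformMakeRotation just fills a standard rotation matrix into the transform's a/b/c/d entries (a = cos θ, b = sin θ, c = -sin θ, d = cos θ). If it helps to see what that does to each point, here is the same math sketched in plain JavaScript (with the usual caveat that UIKit's y axis points down, so positive angles look clockwise on screen):

```javascript
// Apply the 2D rotation matrix that CGAffineTransformMakeRotation builds:
// x' = x*cos θ - y*sin θ,  y' = x*sin θ + y*cos θ.
function rotatePoint(x, y, degrees) {
  const r = degrees * Math.PI / 180; // degrees → radians, as in the answer
  return {
    x: x * Math.cos(r) - y * Math.sin(r),
    y: x * Math.sin(r) + y * Math.cos(r),
  };
}

// (1, 0) rotated a quarter turn lands on (0, 1).
const p = rotatePoint(1, 0, 90);
```

There is no z term anywhere in this matrix, which is why an affine rotation by itself can never produce the 3D tilt described in the previous question.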

iOS CGPath Performance

UPDATE
I got around CG's limitations by drawing everything with OpenGL. Still some glitches, but so far it's working much, much faster.
Some interesting points :
GLKView : That's an iOS-specific view, and it helps a lot in setting up the OpenGL context and rendering loop. If you're not on iOS, I'm afraid you're on your own.
Shader precision : The precision of shader variables in the current version of OpenGL ES (2.0) is 16-bit. That was a little low for my purposes, so I emulated 32-bit arithmetic with pairs of 16-bit variables.
GL_LINES : OpenGL ES can natively draw simple lines. Not very well (no joints, no caps, see the purple/grey line on the top of the screenshot below), but to improve that you'll have to write a custom shader, convert each line into a triangle strip and pray that it works! (supposedly that's how browsers do that when they tell you that Canvas2D is GPU-accelerated)
Draw as little as possible. I suppose that makes sense, but you can frequently avoid rendering things that are, for instance, outside of the viewport.
OpenGL ES has no support for filled polygons, so you have to tessellate them yourself. Consider using iPhone-GLU : that's a port of the MESA code and it's pretty good, although it's a little hard to use (no standard Objective-C interface).
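The "draw as little as possible" point is cheap to act on before anything reaches the GPU: cull each segment whose bounding box misses the visible rect. A hedged sketch of that test (JavaScript, mirroring the canvas version later in the question; the viewport shape is an assumption, not any library's API):

```javascript
// Reject a line segment whose axis-aligned bounding box misses the viewport.
// Conservative: it keeps some segments whose box overlaps but whose line doesn't,
// which is fine — false positives just get drawn, false negatives would be bugs.
function segmentVisible(x1, y1, x2, y2, view) {
  return Math.max(x1, x2) >= view.x &&
         Math.min(x1, x2) <= view.x + view.w &&
         Math.max(y1, y2) >= view.y &&
         Math.min(y1, y2) <= view.y + view.h;
}

const view = { x: 0, y: 0, w: 320, h: 480 };
const onScreen  = segmentVisible(10, 10, 200, 200, view);
const offScreen = segmentVisible(500, 500, 900, 700, view);
```

With thousands of segments and a pannable view, this kind of pre-filter often removes most of the work before tessellation even starts.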
Original Question
I'm trying to draw lots of CGPaths (typically more than 1000) in the drawRect method of my scroll view, which is refreshed when the user pans with his finger. I have the same application in JavaScript for the browser, and I'm trying to port it to an iOS native app.
The iOS test code is (with 100 line operations, path being a pre-made CGMutablePathRef):
- (void)drawRect:(CGRect)rect {
    // Start the timer
    BSInitClass(@"Renderer");
    BSStartTimedOp(@"Rendering");

    // Get the context
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 2.0);
    CGContextSetFillColorWithColor(context, [[UIColor redColor] CGColor]);
    CGContextSetStrokeColorWithColor(context, [[UIColor blueColor] CGColor]);
    CGContextTranslateCTM(context, 800, 800);

    // Draw the points
    CGContextAddPath(context, path);
    CGContextStrokePath(context);

    // Display the elapsed time
    BSEndTimedOp(@"Rendering");
}
In JavaScript, for reference, the code is (with 10000 line operations):
window.onload = function() {
    canvas = document.getElementById("test");
    ctx = canvas.getContext("2d");

    // Prepare the points before drawing
    var data = [];
    for (var i = 0; i < 100; i++) data.push({x: Math.random() * canvas.width, y: Math.random() * canvas.height});

    // Draw those points, and write the elapsed time
    var __start = new Date().getTime();
    for (var i = 0; i < 100; i++) {
        for (var j = 0; j < data.length; j++) {
            var d = data[j];
            if (j == 0) ctx.moveTo(d.x, d.y);
            else ctx.lineTo(d.x, d.y);
        }
    }
    ctx.stroke();
    document.write("Finished in " + (new Date().getTime() - __start) + "ms");
};
Now, I'm much more proficient in optimizing JavaScript than I am at iOS, but, after some profiling, it seems that CGPath's overhead is absolutely, incredibly bad compared to JavaScript. Both snippets run at about the same speed on a real iOS device, and the JavaScript code has 100x the number of line operations of the Quartz2D code!
EDIT: Here is the top of the time profiler in Instruments :
Running Time Self Symbol Name
6487.0ms 77.8% 6487.0 aa_render
449.0ms 5.3% 449.0 aa_intersection_event
112.0ms 1.3% 112.0 CGSColorMaskCopyARGB8888
73.0ms 0.8% 73.0 objc::DenseMap<objc_object*, unsigned long, true, objc::DenseMapInfo<objc_object*>, objc::DenseMapInfo<unsigned long> >::LookupBucketFor(objc_object* const&, std::pair<objc_object*, unsigned long>*&) const
69.0ms 0.8% 69.0 CGSFillDRAM8by1
66.0ms 0.7% 66.0 ml_set_interrupts_enabled
46.0ms 0.5% 46.0 objc_msgSend
42.0ms 0.5% 42.0 floor
29.0ms 0.3% 29.0 aa_ael_insert
It is my understanding that this should be much faster on iOS, simply because the code is native... So, do you know:
...what I am doing wrong here?
...and if there's another, better solution to draw that many lines in real-time?
Thanks a lot!
As you described in the question, using OpenGL is the right solution.
Theoretically, you can emulate any kind of graphics drawing with OpenGL, but you need to implement every shape algorithm yourself. For example, you need to extend the edge corners of lines yourself; there's no real concept of lines in OpenGL. Its line drawing is a utility feature, used almost exclusively for debugging, so you should treat everything as a set of triangles.
I believe 16-bit floats are enough for most drawings. If you're using coordinates with large values, consider dividing the space into multiple sectors to keep the coordinate numbers smaller. Float precision degrades as values become very large or very small.
Update
I think you will run into this issue soon if you try to display UIKit over an OpenGL view. Unfortunately, I haven't found a solution for it yet either.
How to synchronize OpenGL drawing with UIKit updates
You killed CGPath performance by using CGContextAddPath.
Apple explicitly says this will run slowly - if you want it to run fast, you are required to attach your CGPath objects to CAShapeLayer instances.
You're doing dynamic, runtime drawing - blocking all of Apple's performance optimizations. Try switching to CALayer - especially CAShapeLayer - and you should see performance improve by a large amount.
(NB: there are other performance bugs in CG rendering that might affect this use case, such as obscure default settings in CG/Quartz/CA, but ... you need to get rid of the bottleneck on CGContextAddPath first)

XNA 2D convert a world position into screen position

Okay guys, I have spent a good two weeks trying to figure this out. I've tried some of my own ways to work it out by math alone and had no success. I've also looked everywhere and seen people recommend Viewport.Project().
It's not that simple. Everywhere I've looked, including MSDN and all the forums, people just suggest using it, but they don't explain the matrices and values it requires to work. I have found no useful information on how to use this method correctly, and it's seriously driving me insane. Please help this poor fella out.
The first thing I'm going to do is post my current code. I have five or so different versions; none of them have worked. The closest I got was getting NaN, which I don't understand. I'm trying to have text displayed on my screen based on where asteroids are, and if they are very far away, the text will act as a guide so players can go to the asteroids.
Vector3 camLookAt = Vector3.Zero;
Vector3 up = new Vector3(0.0f, 0.0f, 0.0f);
float nearClip = 1000.0f;
float farClip = 100000.0f;
float viewAngle = MathHelper.ToRadians(90f);
float aspectRatio = (float)viewPort.Width / (float)viewPort.Height;
Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, 0), camLookAt, up);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(viewAngle, aspectRatio, nearClip, farClip);
Matrix world = Matrix.CreateTranslation(camPosition.X, camPosition.Y, 0);
Vector3 projectedPosition = graphicsDevice.Viewport.Project(new Vector3(worldPosition.X, worldPosition.Y, 0), projection, view, Matrix.Identity);
screenPosition.X = projectedPosition.X;
screenPosition.Y = projectedPosition.Y;
if (screenPosition.X > 400) screenPosition.X = 400;
if (screenPosition.X < 100) screenPosition.X = 100;
if (screenPosition.Y > 400) screenPosition.Y = 400;
if (screenPosition.Y < 100) screenPosition.Y = 100;
return screenPosition;
As far as I know, Project is the camera position. My game is 2D, so Vector3 is annoying. I thought maybe my Z could be the camera zoom, projection might be the object we want to convert to the 2D screen, view might be the size of how much the camera can see, and the last one I'm not sure about.
After about two weeks of searching for information with no improvement in code or knowledge, and getting more confused as I look at MSDN tutorials, I decided I'd post a question, because I'm extremely close to just not implementing world-to-screen conversion at all. All help is appreciated, thanks :)
Plus, I'm using a 2D game, and it adds confusion when most of the time people talk about the Z axis, when a 2D game does not have a Z axis; it's just transforming sprites to appear like zoom or movement. Thanks again :)
I may be misunderstanding here, but I don't think you need to use the 3D camera XNA provides for a 2D game. Unless you're trying to make a 2.5D game that uses 3D for some sort of parallax system, you don't need it at all. Take a look at these:
2D Camera Implementation in XNA
Simpler 2D Camera
XNA 2D tutorials
2D in XNA works differently from 3D. You don't need to worry about a 3D viewport or a 3D camera at all; there is no near or far clipping. 2D is well handled in XNA, and I think you are misunderstanding a bit how it works.
PS: You don't need to use Vector3s. Instead, use Vector2s. I think you will find them much easier to work with in a 2D game. ^^
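For completeness, the 2D version of "world to screen" really is just a subtraction and a scale, which is why the linked 2D camera tutorials skip Viewport.Project() entirely. A hedged sketch of that math (JavaScript here rather than C#; worldToScreen and the camera/zoom fields are illustrative, not an XNA API):

```javascript
// 2D world → screen: offset by the camera position, scale by zoom,
// then recentre on the middle of the screen. The inverse mapping
// (screen → world) just runs the same steps backwards.
function worldToScreen(world, cam, zoom, screenW, screenH) {
  return {
    x: (world.x - cam.x) * zoom + screenW / 2,
    y: (world.y - cam.y) * zoom + screenH / 2,
  };
}

// An asteroid at the camera's position lands dead centre...
const centre = worldToScreen({ x: 50, y: 50 }, { x: 50, y: 50 }, 2, 800, 600);
// ...and one 100 world units to the right lands 200 px right of centre at 2x zoom.
const right  = worldToScreen({ x: 150, y: 50 }, { x: 50, y: 50 }, 2, 800, 600);
```

Clamping the result to the screen edges (as the question's code does with its 100..400 checks) then gives the off-screen asteroid indicators without any 3D matrices.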