I am working on a camera-based iOS app in which I have to capture a first point and then draw a line from the current focus point back to that first captured point. MagicPlan works this way.
Here is an image:
I have tried to fix the first point using accelerometer values and the tilt angle of the device, but no luck so far. And how would I draw the line from the first point to the second point?
This is the code that i have tried so far:
if (self.motionManager.deviceMotionAvailable)
{
    [self.motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue]
                                            withHandler:^(CMDeviceMotion *motion, NSError *error) {
        CATransform3D transform;
        transform = CATransform3DMakeRotation(motion.attitude.pitch, 1, 0, 0);
        transform = CATransform3DRotate(transform, motion.attitude.roll, 0, 1, 0);
        transform = CATransform3DRotate(transform, motion.attitude.yaw, 0, 0, 1);
        self.viewObject.layer.transform = transform;
    }];
}
if (self.motionManager.deviceMotionActive)
{
    /**
     * Pulling gravity values from the deviceMotion sensor
     */
    CGFloat x = [self convertRadianToDegree:self.motionManager.deviceMotion.gravity.x];
    CGFloat y = [self convertRadianToDegree:self.motionManager.deviceMotion.gravity.y];
    CGFloat z = [self convertRadianToDegree:self.motionManager.deviceMotion.gravity.z];
    CGFloat r = sqrtf(x*x + y*y + z*z);

    /**
     * Calculating the device's forward/backward tilt angle in degrees
     */
    CGFloat tiltForwardBackward = acosf(z/r) * 180.0f / M_PI - 90.0f;
    [self.lblTilForwardBackward setText:[@(tiltForwardBackward) stringValue]];
}
You have a lot of issues to resolve here. It isn't just a matter of adjusting for camera orientation, as the height at which the camera is held and its position in the room are also changing. Even in MagicPlan, when the person turns around, the camera moves (it rotates about the axis running from the person's head down to their feet).
There is quite a lot of algebra and rotation/translation matrix operations to work out. No one is going to do this for you. You'll have to figure it all out and derive it yourself (or look it up from old graphics text books).
I suggest doing something as straightforward and multi-step as possible (so you can debug each step along the way). Assume flat ground (an indoor environment).

1. Get the camera position/orientation/focal length from the first snapshot.
2. Figure out the touch point in real-world Cartesian coordinates: start with video coordinates and translate via roll/pitch/yaw and a ray-traced projection onto the ground plane, using the camera height (a sketch of this projection follows the list).
3. From the focal length you can figure out the field of view and the depth to the center of the field of view; then, using the camera orientation and the touch's distance from the center of the screen, determine the xyz offset from some origin (your feet, maybe).
4. Determine and track the camera position and orientation relative to that origin.
5. On the second snapshot (or motion awake), figure out the touched (or center) point's distance from the origin and its exact xyz, as above.
6. Once you have those two points in xyz you can plot the line by taking the standard orthogonal projection onto the view plane, clipping as needed in case the original point is out of the FOV.
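For step 2, here is a minimal sketch of that ground-plane projection, assuming flat ground and the device held roughly upright. cameraHeight (metres above the floor) and fovY (vertical field of view in radians) are assumed inputs you would calibrate or estimate yourself, and the sign conventions will need checking against real data.

#import <CoreMotion/CoreMotion.h>

- (CGPoint)groundPointForTouch:(CGPoint)touch
                        inView:(UIView *)view
                      attitude:(CMAttitude *)attitude   // from CMDeviceMotion
                  cameraHeight:(double)cameraHeight     // assumed: metres above the floor
                          fovY:(double)fovY             // assumed: vertical FOV in radians
{
    CGSize size = view.bounds.size;

    // Angular offset of the touch below the optical axis, scaled by the FOV.
    double touchAngle = ((touch.y - size.height / 2.0) / size.height) * fovY;

    // Angle of the ray below the horizon. With the device upright,
    // attitude.pitch is about pi/2, so (pi/2 - pitch) is the camera's tilt
    // below the horizon; add the touch's own angular offset.
    double rayAngle = (M_PI_2 - attitude.pitch) + touchAngle;
    if (rayAngle <= 0.0) {
        return CGPointZero;   // ray is at or above the horizon; no ground hit
    }

    // Horizontal distance at which the ray meets the ground plane:
    // tan(rayAngle) = cameraHeight / distance.
    double distance = cameraHeight / tan(rayAngle);

    // Swing the point around the origin (your feet) by the device's yaw.
    return CGPointMake(distance * sin(attitude.yaw),
                       distance * cos(attitude.yaw));
}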
I am trying to draw an image on top of another image. I have the image's size, transform and origin. My code below produces the correct size and rotation angle, but the watermark is not drawn at the correct point.
Code:
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect baseRect = CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height);
[backgroundImage drawInRect:baseRect];
CGRect newRect = CGRectMake(x, y, width, height);
CGContextTranslateCTM(context, x, y);
CGContextConcatCTM(context, watermarkImageView.transform);
CGContextTranslateCTM(context, -x, -y);
[watermarkImageView.image drawInRect:newRect];
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
The watermark image should be placed like this:
But currently it's looking like this:
What did I miss?
Thanks in advance
EDIT
The x and y values are the edge of the bounding box.
Your code doesn't show what watermarkImageView.transform is, and that is important, because when you concat transformations, the effects of previous transformations also affect all the following transformations.
E.g. a translation that moves the object 10 pixels along the x-axis will move the object 10 pixels to the right. However, if you first have a rotation that rotates the object by 45 degrees and then add a translation that moves 10 pixels along the x-axis, the object will not move 10 pixels to the right, it will move 10 pixels along a line that is 45 degrees rotated, which means it will move about 7 pixels up and 7 pixels to the right. That's because a rotation does not really rotate the object itself, it actually rotates the whole coordinate system which causes the object to be drawn rotated.
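As a quick sanity check of those numbers, here is a small sketch using a plain CGAffineTransform (my own illustration, not from the original post):

// Rotate by 45 degrees, then translate 10 points along x. The translation
// happens along the rotated axes, so the point ends up moving diagonally.
CGAffineTransform t = CGAffineTransformMakeRotation(M_PI_4);
t = CGAffineTransformTranslate(t, 10, 0);
CGPoint p = CGPointApplyAffineTransform(CGPointZero, t);
// p is roughly (7.07, 7.07): about 7 points along each axis. Whether that
// reads as "up" or "down" depends on the coordinate system's y direction.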
See this image:
Initially the translation coordinate system (red lines) matches the "real coordinate" system. But after the rotation by 45 degrees, the translation coordinate system has been rotated and now translating across the red lines moves the object diagonally.
Think about a sheet of paper and a stamp. The stamp always has the same position and the same orientation, you cannot move or rotate the stamp. But you can move and rotate the sheet of paper below the stamp! And that's what your transformations do. They transform the sheet before the stamp is pressed upon it.
For most people it is very hard to imagine the effects of transforming the whole space, it's much easier for them to think about transforming the object. The trick is: You must read your transformations in the opposite order than you wrote them. I guess what you want to do is actually:
CGContextTranslateCTM(context, x, y);
CGContextConcatCTM(context, watermarkImageView.transform);
CGContextTranslateCTM(context,
    -watermarkImageView.bounds.size.width * 0.5,
    -watermarkImageView.bounds.size.height * 0.5
);
Now read them in the opposite order (from bottom to top). First you center the watermark around (0,0) by moving it up half the height and left half the width. Now the center of your watermark is exactly at (0,0). Then you rotate it as desired. Finally you move it to the desired position. Of course you wrote all transformations the other way round but that's only because you are transforming the coordinate space, not the object.
Centering your watermark prior to rotation is important because rotation always rotates around (0,0) coordinates. If you'd just rotate, the rotation looks like this:
That's not what you want as it will not just rotate the object but also changes its position. If you center the image around (0,0) first, the rotation looks like this instead:
The answer to my question was that I had to translate the context to the centre of where I want to draw.
CGContextTranslateCTM(context, imageView.center.x, imageView.center.y);
Then rotate context.
CGFloat angle = [(NSNumber *)[imageView valueForKeyPath:@"layer.transform.rotation.z"] floatValue];
CGContextRotateCTM(context, angle);
Then draw
[imageView.image drawInRect:CGRectMake(-width * 0.5f, -height * 0.5f, width, height)];
I am trying to flip a scaled and rotated UIView on the horizontal axis.
Here is the code being used -
CGFloat xScale = selectedFrame.transform.a;
CGFloat yScale = selectedFrame.transform.d;
selectedFrame.layer.transform = CATransform3DScale(CATransform3DMakeRotation(M_PI, 0, 1, 0), xScale, yScale, -1);
The output of this is that the view flips properly and the original scale factor is also maintained, but the rotation isn't.
Here are the images to explain the problem -
Original Image (The tiger view has to be flipped on the horizontal axis) -
Flipped image after the above code (see that the scale is maintained but the rotation angle isn't, and the image is flipped properly) -
Any help would be really appreciated!
To flip the image you are using a rotation of M_PI around the Y axis. To rotate the image, you need to apply another different rotation around the Z axis. These are two different transforms. You can combine them using CATransform3DConcat. Then you can scale the resulting transform.
CATransform3D transform = CATransform3DConcat(CATransform3DMakeRotation(zRotation, 0, 0, 1),
                                              CATransform3DMakeRotation(M_PI, 0, 1, 0));
[layer setTransform:CATransform3DScale(transform, xScale, yScale, 1)];
The problem with your original code is that you are only applying the Y axis transform.
I'm sure there is a more elegant way to do this, but this works on my simulator.
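One caveat worth adding (my own note, not from the answer): if the view is already rotated, transform.a and transform.d are no longer pure scale factors. Assuming the transform is only rotation plus scale (no skew), all three components can be recovered like this:

// For a rotation-plus-scale affine transform:
// a = sx*cos(angle), b = sx*sin(angle), c = -sy*sin(angle), d = sy*cos(angle)
CGFloat zRotation = atan2(selectedFrame.transform.b, selectedFrame.transform.a);
CGFloat xScale    = hypot(selectedFrame.transform.a, selectedFrame.transform.b);
CGFloat yScale    = hypot(selectedFrame.transform.c, selectedFrame.transform.d);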
I have two image views. The first is the blueish arrow, and the second is the white circle, with a black dot drawn to represent the center of the circle.
I'm trying to rotate the arrow so its anchor point is the black dot in the picture, like this
Right now I'm setting the anchor point of the arrow's layer to a point calculated like this
CGFloat y = _userImageViewContainer.center.y - CGRectGetMinY(_directionArrowView.frame);
CGFloat x = _userImageViewContainer.center.x - CGRectGetMinX(_directionArrowView.frame);
CGFloat yOff = y / CGRectGetHeight(_directionArrowView.frame);
CGFloat xOff = x / CGRectGetWidth(_directionArrowView.frame);
_directionArrowView.center = _userImageViewContainer.center;
CGPoint anchor = CGPointMake(xOff, yOff);
NSLog(@"anchor: %@", NSStringFromCGPoint(anchor));
_directionArrowView.layer.anchorPoint = anchor;
Since the anchor point is set as a fraction of the view (i.e. the coords for the center are (.5, .5)), I'm calculating the fraction of the arrow frame's height at which the black dot falls. But my math, even after working it out by hand, keeps resulting in .5, which isn't right, because the dot is further than halfway down when the arrow is in its original position (vertical, with the point up).
I'm rotating based on the user's compass heading
CLHeading *heading = [notif object];
// update direction of arrow
CGFloat degrees = [self p_calculateAngleBetween:[PULAccount currentUser].location.coordinate
and:_user.location.coordinate];
_directionArrowView.transform = CGAffineTransformMakeRotation((degrees - heading.trueHeading) * M_PI / 180);
The rotation is correct, it's just the anchor point that's not working right. Any ideas of how to accomplish this?
I've always found the anchor point stuff flaky, especially with rotation. I'd try something like this.
CGPoint convertedCenter = [_directionArrowView convertPoint:_userImageViewContainer.center fromView:_userImageViewContainer ];
CGSize offset = CGSizeMake(_directionArrowView.center.x - convertedCenter.x, _directionArrowView.center.y - convertedCenter.y);
// I may have that backwards, try the one below if it offsets the rotation in the wrong direction..
// CGSize offset = CGSizeMake(convertedCenter.x -_directionArrowView.center.x , convertedCenter.y - _directionArrowView.center.y);
CGFloat rotation = 0; //get your angle (radians)
CGAffineTransform tr = CGAffineTransformMakeTranslation(-offset.width, -offset.height);
tr = CGAffineTransformConcat(tr, CGAffineTransformMakeRotation(rotation) );
tr = CGAffineTransformConcat(tr, CGAffineTransformMakeTranslation(offset.width, offset.height) );
[_directionArrowView setTransform:tr];
NB: the transform property on UIView is animatable, so you could put that last line in an animation block if desired.
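For example (a sketch, animating the transform built above):

[UIView animateWithDuration:0.25 animations:^{
    [_directionArrowView setTransform:tr];
}];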
A much easier solution might be to make the arrow image bigger and square, so that the black point sits in the center of the image.
Compare the attached images and you'll see what I mean.
New image with the black dot in the center:
Old image with the shifted dot:
Now you can easily use the standard anchor point (0.5, 0.5) to rotate the edited image.
I'm developing an iPhone app where the main views are presented to the user on the surface of a cube. Users switch views by rotating the cube with a pan gesture.
To achieve this I am using the GKLCubeController class from this GitHub project.
In terms of adding views to a cube and rotating it, this works fine. However, the angular rotation of the cube doesn't map correctly to the current x position of the finger as it pans across the screen.
The problem is that the cube rotation lags behind the finger movement by about ½ second making the cube feel ‘heavy’ as illustrated in this short screencast.
The code handling the rotation is shown below:
- (void)panHandler:(UIPanGestureRecognizer *)panner {
    CGPoint translatedPoint = [panner translationInView:self.view.window];
    CGFloat halfWidth = self.view.bounds.size.width / 2.0;

    // save our starting points
    if ([panner state] == UIGestureRecognizerStateBegan) {
        startingX = translatedPoint.x;
        if (!transformLayer) {
            transformLayer = [[CATransformLayer alloc] init];
            transformLayer.frame = self.view.layer.bounds;

            for (UIView *viewToTranslate in views) {
                [viewToTranslate removeFromSuperview];
                [transformLayer addSublayer:viewToTranslate.layer];
            }

            // add in this new layer
            [self.view.layer addSublayer:transformLayer];
        }
    } else if ([panner state] == UIGestureRecognizerStateEnded) {
        ...
    } else {
        // instantly adjust our transformation layer
        CATransform3D transform = CATransform3DIdentity;
        transform.m34 = kPerspective;

        double percentageOfWidth = (translatedPoint.x - startingX) / self.view.frame.size.width;
        transform = CATransform3DTranslate(transform, 0, 0, -halfWidth);
        double adjustmentAngle = percentageOfWidth * M_PI_2 + startingAngle;
        transform = CATransform3DRotate(transform, adjustmentAngle, 0, 1, 0);
        transform = CATransform3DTranslate(transform, 0, 0, halfWidth);
        transformLayer.transform = transform;

        finishingAngle = adjustmentAngle;
    }
}
I've a suspicion the problem is something to do with the conversion of the CGPoint.x returned by UIPanGestureRecognizer translationInView: to a rotation angle. Can anyone confirm whether this is the case, and suggest what the correct maths should be for mapping the touch position x to the rotation of a cube such that the cube edge tracks the finger motion as it pans across the screen?
There are two issues here:
The major performance issue here is the way this class performs the transforms for the sides of the cube. It gives each side of the cube a complicated transform, and then, as you drag the cube around, it takes the relevant sides of the cube, adds them to a CATransformLayer, and performs a complicated transform upon that layer (thus, when you look at the individual sides of the cube, you're doing a transform of a transform).
I pulled out that CATransformLayer logic, and updated the transform for the individual sides, and it was dramatically more responsive.
By the way, you might still want to employ something like this CATransformLayer logic when you animate letting go of the rotated cube, as it's an excellent way of synchronizing the animation of the individual sides of the cube (otherwise you get some separation between the sides of the cube during the animation). But while dragging, there's too much of a performance hit.
As you continue to refine this, there are possibly other optimizations that can be done, but my testing suggests that getting rid of a transformation on a complicated transformation made a huge impact on performance.
And, by the way, make sure to test this on a device, not the simulator, as the simulator's graphics performance is very different from that of a device.
A minor factor that might contribute a slight initial delay in responsiveness may be the inherent delay in UIPanGestureRecognizer (which looks for a certain amount of movement before recognizing the gesture as a pan, so that other gestures such as taps and the like can trigger if appropriate). It's a modest delay and a very small part of your performance problem, but for the quickest of response times, you might not want to use the UIPanGestureRecognizer. Either subclass your own, or use a UILongPressGestureRecognizer with a minimumPressDuration of 0.0, and you can get instantaneous response to the gesture.
You'll see this respond more quickly to movement (but it's also a gesture that doesn't play well with others: if you have tap gestures or the like inside the view, they won't be triggered).
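A sketch of that zero-delay recognizer (pressHandler: is a hypothetical selector; note that UILongPressGestureRecognizer has no translationInView:, so the handler must track locationInView: deltas itself):

UILongPressGestureRecognizer *presser =
    [[UILongPressGestureRecognizer alloc] initWithTarget:self
                                                  action:@selector(pressHandler:)];
presser.minimumPressDuration = 0.0;   // fire immediately on touch-down
[self.view addGestureRecognizer:presser];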
I'm starting to program in iOS with Xcode. My idea is to display a watch whose hands indicate altitude according to the GPS data received from the iPhone. I have a variable that returns the height in meters; how do I turn that into a clockwise rotation of the hand?
Set the view.transform property based on the distance.
You'll need to convert the distance into radians. Here's some code
#define METERS_IN_A_CIRCLE 100.0f // change this to get a different scale
CGFloat distance = 40.0f; // the distance in meters - set this to your variable value
CGFloat angle = (M_PI * 2.0f) / METERS_IN_A_CIRCLE * distance;
view.transform = CGAffineTransformMakeRotation(angle);
If it's turning the wrong way, use -angle in the transform instead. If it's off by 1/4 of a circle, subtract M_PI_2 from the angle.
For your watch hand to look right, the centre of the hand image will need to be in the centre of the imageView, so you may need to leave a lot of empty white space in the image, but don't worry, it won't affect performance significantly.
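For completeness, here's a sketch of wiring the GPS altitude into that rotation. handImageView is a hypothetical image view holding the hand image, and this assumes you already have a CLLocationManager delivering updates to this delegate:

#import <CoreLocation/CoreLocation.h>

#define METERS_IN_A_CIRCLE 100.0f

- (void)locationManager:(CLLocationManager *)manager
     didUpdateLocations:(NSArray<CLLocation *> *)locations
{
    // GPS altitude in metres, mapped onto a full revolution of the hand.
    CLLocationDistance altitude = locations.lastObject.altitude;
    CGFloat angle = (M_PI * 2.0f) / METERS_IN_A_CIRCLE * altitude;
    self.handImageView.transform = CGAffineTransformMakeRotation(angle);
}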