I am working on an app that renders an OpenGL object and uses Core Motion to rotate it as the device moves. I use the CMRotationMatrix from CMDeviceMotion to compute the rotation.
It works well as the device moves, and the object appears straight when the device is lying flat on a table. But if the device is held at an angle when the app starts, the object appears tilted. Is it possible to make the object appear straight regardless of the device's starting angle? Any help would be appreciated.
CMDeviceMotion *deviceMotion = self.motionManager.deviceMotion;
float deviceOrientationRadians = M_PI_2;
GLKMatrix4 baseRotation =
    GLKMatrix4MakeRotation(deviceOrientationRadians, 0.0f, 0.0f, 1.0f);
CMRotationMatrix a = deviceMotion.attitude.rotationMatrix;
GLKMatrix4 deviceMotionAttitudeMatrix =
    GLKMatrix4Make(a.m11, a.m21, a.m31, 0.0f,
                   a.m12, a.m22, a.m32, 0.0f,
                   a.m13, a.m23, a.m33, 0.0f,
                   0.0f,  0.0f,  0.0f,  1.0f);
deviceMotionAttitudeMatrix =
    GLKMatrix4Multiply(baseRotation, deviceMotionAttitudeMatrix);
CGContextDrawRadialGradient produces a very visible ‘cross’ at the centre of the gradient:
Code (reduced):
- (void)drawRect:(NSRect)dirtyRect {
    CGContextRef context = [[NSGraphicsContext currentContext] graphicsPort];
    size_t numberOfGradientLocations = 2;
    CGFloat startRadius = 0.0f;
    CGFloat endRadius = 30.0f;
    CGPoint centre = CGPointMake(floorf(self.bounds.size.width / 2), floorf(self.bounds.size.height / 2));
    // Black, fading from fully opaque to fully transparent
    CGFloat gradientColours[8] = {0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f};
    CGFloat gradientLocations[2] = {0.0f, 1.0f};
    CGColorSpaceRef colourspace = CGColorSpaceCreateDeviceRGB();
    CGGradientRef gradient = CGGradientCreateWithColorComponents(colourspace, gradientColours, gradientLocations, numberOfGradientLocations);
    CGContextDrawRadialGradient(context, gradient, centre, startRadius, centre, endRadius, kCGGradientDrawsBeforeStartLocation);
    CGGradientRelease(gradient);
    CGColorSpaceRelease(colourspace);
}
This happens on both macOS and iOS. Meanwhile, the same kind of gradient renders perfectly in WebKit with CSS (so it’s not some ‘bad display’ issue).
What am I doing wrong? Is there a known way around this?
Here's what I get on iOS; I used the iPhone 7 Plus simulator at 100% so that every pixel is visible, and it seems completely smooth:
EDIT: After some experimentation, I suspect you're seeing a moiré effect caused by misalignment between points and pixels.
In my iOS project I am using CLImageEditor, and I need to add a "straighten" effect to it. Can anyone please help me with how to implement a "straighten" effect?
Add QuartzCore.framework to your project, then:
#import <QuartzCore/QuartzCore.h>
Here is some basic code for rotation, perspective, and scale (rotate, translate, and scale are values you supply):
CALayer *layer = myView.layer;
CATransform3D rotationAndPerspectiveTransform = CATransform3DIdentity;
// Rotate about the z axis
rotationAndPerspectiveTransform = CATransform3DRotate(rotationAndPerspectiveTransform, rotate, 0.0f, 0.0f, 1.0f);
// Perspective: the "mysterious" m34 entry
// m34 = -1.0 / d, where d is roughly the viewing distance (about the size of the view)
rotationAndPerspectiveTransform.m34 = -1.0f / 500.0f;
rotationAndPerspectiveTransform = CATransform3DTranslate(rotationAndPerspectiveTransform, 0.0f, 0.0f, translate);
// Scale
rotationAndPerspectiveTransform = CATransform3DScale(rotationAndPerspectiveTransform, scale, scale, 1.0f);
layer.transform = rotationAndPerspectiveTransform;
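Why -1.0 / 500? Core Animation multiplies row vectors by the matrix, so with only m34 set, a point's homogeneous w becomes 1 + z * m34, and x / w is what lands on screen; the d in m34 = -1/d behaves like the eye's distance from the layer. A small numeric sketch of that divide (perspective_screen_x is a hypothetical helper, not a CA API):

```c
#include <assert.h>
#include <math.h>

/* With only m34 set in a CATransform3D, the transformed w of a point at
   depth z is w = 1 + z * m34, and the on-screen x is x / w after the
   perspective divide. distance plays the role of the eye distance. */
double perspective_screen_x(double x, double z, double distance) {
    double m34 = -1.0 / distance;  /* the "mysterious" entry */
    double w = 1.0 + z * m34;
    return x / w;
}
```

So with d = 500, a point halfway toward the eye (z = 250) appears twice as far from the centre, while a point 500 units away appears half as far, which is the foreshortening the transform is after.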
I am working on a 3D first-person game. Using the gyro in the phone, players can look up and down, side to side, and so on. I also want to add manual controls on top of this.
This code handles the gyro:
GLKMatrix4 deviceMotionAttitudeMatrix;
if (_cmMotionmanager.deviceMotionActive) {
    CMDeviceMotion *deviceMotion = _cmMotionmanager.deviceMotion;
    GLKMatrix4 baseRotation = GLKMatrix4MakeRotation(M_PI_2, 0.0f, 0.0f, 1.0f);
    // Note: in the simulator this comes back as the zero matrix.
    // On device, this doesn't include the changes required to match screen rotation.
    CMRotationMatrix a = deviceMotion.attitude.rotationMatrix;
    deviceMotionAttitudeMatrix
        = GLKMatrix4Make(a.m11, a.m21, a.m31, 0.0f,
                         a.m12, a.m22, a.m32, 0.0f,
                         a.m13, a.m23, a.m33, 0.0f,
                         0.0f,  0.0f,  0.0f,  1.0f);
    deviceMotionAttitudeMatrix = GLKMatrix4Multiply(baseRotation, deviceMotionAttitudeMatrix);
    // NSLog(@"%f %f %f %f %f %f %f %f", a.m11, a.m21, a.m31,
    //       a.m12, a.m22, a.m32, a.m23, a.m33);
}
I then want to add the manual values for up/down and left/right. I can easily add left/right:
deviceMotionAttitudeMatrix = GLKMatrix4RotateZ(deviceMotionAttitudeMatrix, (self.rotateLeft / 20));
However, when I add up/down, it works when facing north/south, but when facing east/west it rotates on a different axis:
deviceMotionAttitudeMatrix = GLKMatrix4RotateY(deviceMotionAttitudeMatrix, (self.rotateTop / 20));
I have tried multiplying the matrices, but that has the same issue. It feels like I need to merge them rather than multiply them:
GLKMatrix4 leftRotation = GLKMatrix4MakeRotation((self.rotateLeft / 20), 0.0f, 0.0f, 1.0f);
GLKMatrix4 TopRotation = GLKMatrix4MakeRotation(-(self.rotateTop / 20), 0.0f, 1.0f, 0.0f);
GLKMatrix4 complete = GLKMatrix4Multiply(TopRotation, leftRotation);
deviceMotionAttitudeMatrix = GLKMatrix4Multiply(deviceMotionAttitudeMatrix,complete);
I am new to OpenGL ES on iOS. I created an "OpenGL Game" project in Xcode 5, then built and ran it, and there are two cubes rotating. I have read the code and do not understand why there are two cubes instead of one.
Here is the code related to the cubes:
- (void)update {
    float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
    GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
    self.effect.transform.projectionMatrix = projectionMatrix;
    GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);
    baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f);
    // Compute the model view matrix for the object rendered with GLKit
    GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f);
    modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
    modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);
    self.effect.transform.modelviewMatrix = modelViewMatrix;
    // Compute the model view matrix for the object rendered with ES2
    modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f);
    modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f);
    modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix);
    _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
    _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
    _rotation += self.timeSinceLastUpdate * 0.5f;
}
It seems that GLKit and ES2 each render one cube, but I don't know why.
The "OpenGL Game" template creates a program that renders 2 cubes into the GLKView. One of those cubes is rendered using the GLKit analogs of the old OpenGL ES 1.1 fixed function pipeline, the other cube is rendered through an ES 2.0 pipeline using the fragment and vertex shaders included in the template.
I think they do both so you have a working starting point for whichever pipeline you choose to use. For my current ES 2.0 project I simply stripped out all of the code relating to the ES1.1 cube and built on the code provided for the 2.0 pipeline. If you were porting some existing ES 1.1 code into GLKit you might do the opposite and remove all the stuff relating to the ES 2.0 cube.
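A side note on one line of the template that often puzzles people: _normalMatrix is the inverse-transpose of the upper-left 3x3 of the model view matrix. Normals can't simply be transformed by the model view matrix itself, because under a non-rigid transform (shear, non-uniform scale) they would stop being perpendicular to the surface; the inverse-transpose restores perpendicularity. A tiny 2D sketch of the idea, using a shear (hypothetical Mat2 helpers, not GLKit):

```c
#include <assert.h>
#include <math.h>

/* 2x2 matrix [[a b],[c d]] acting on column vectors: v' = M v. */
typedef struct { double a, b, c, d; } Mat2;

void mat2_apply(Mat2 m, const double v[2], double out[2]) {
    out[0] = m.a * v[0] + m.b * v[1];
    out[1] = m.c * v[0] + m.d * v[1];
}

Mat2 mat2_inverse_transpose(Mat2 m) {
    double det = m.a * m.d - m.b * m.c;
    /* inverse = [[d -b],[-c a]] / det; transposing swaps the off-diagonals */
    Mat2 r = { m.d / det, -m.c / det, -m.b / det, m.a / det };
    return r;
}

/* Dot product of the transformed tangent and transformed normal of a
   vertical edge (tangent (0,1), normal (1,0)) under a shear.
   Zero means the normal is still perpendicular after the transform. */
double normal_error(int use_inverse_transpose) {
    Mat2 shear = { 1.0, 1.0, 0.0, 1.0 };  /* x' = x + y, y' = y */
    double tangent[2] = { 0.0, 1.0 }, normal[2] = { 1.0, 0.0 };
    double t2[2], n2[2];
    mat2_apply(shear, tangent, t2);
    mat2_apply(use_inverse_transpose ? mat2_inverse_transpose(shear) : shear, normal, n2);
    return fabs(t2[0] * n2[0] + t2[1] * n2[1]);
}
```

Transforming the normal by the shear itself leaves it visibly tilted off the surface, while the inverse-transpose keeps it perpendicular; GLKMatrix3InvertAndTranspose does the same job in 3D for the shader's lighting.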
I'm trying to rotate an image around the center of a view using sliders, one each for the x, y, and z axes. I would like the image to rotate inside its frame boundaries (700x700), but for some reason it rotates around the edges of the frame. I tried resetting the anchor point, but that doesn't seem to do anything. Here's the code that rotates it around the y axis; you'll notice the anchor point section commented out, since it wasn't having any effect at all. Any idea what I'm doing wrong?
float angleYDeg = self.ySlider.value;
float angleYRad = (angleYDeg / 180.0) * M_PI;
// Disable Animation
[CATransaction begin];
[CATransaction setValue:[NSNumber numberWithBool:YES]
                 forKey:kCATransactionDisableActions];
// Get Layers
CALayer *containerLayer = [[self.imageView.layer sublayers] objectAtIndex:0];
CALayer *holder = [containerLayer valueForKey:@"__holderLayer"];
//CGPoint anchor = holder.anchorPoint;
//anchor.y = self.imageView.frame.size.height/2;
//holder.anchorPoint = anchor;
// Update angleY value
[containerLayer setValue:[NSNumber numberWithFloat:angleYRad] forKey:@"__angleY"];
// Apply rotations
CGFloat angleX = [[containerLayer valueForKey:@"__angleX"] floatValue];
CATransform3D holderTransform = CATransform3DMakeRotation(angleX, 1.0f, 0.0f, 0.0f);
holderTransform = CATransform3DRotate(holderTransform, angleYRad, 0.0f, 1.0f, 0.0f);
holder.transform = holderTransform;
[CATransaction commit];
// Update Label
self.yValueLabel.text = [NSString stringWithFormat:@"%4.0fº", angleYDeg];
I'm not sure of the exact answer to your question, as I can't figure out where it's going wrong in the code. By default things should rotate around the center. I wrote a demo app that shows how to change the rotation for all the axes using a UISlider. I've posted it on github here: https://github.com/perlmunger/AllAxis.git . Here is the gist of the code, though:
- (IBAction)sliderDidChange:(id)sender
{
    [self setTransforms];
}

- (void)setTransforms
{
    CGFloat x = [_xSlider value];
    CGFloat y = [_ySlider value];
    CGFloat z = [_zSlider value];
    CATransform3D transform = CATransform3DIdentity;
    transform.m34 = 1.0f / -800.0f; // perspective
    CGFloat rotationValue = x * kRotationMaxInDegrees;
    transform = CATransform3DRotate(transform,
                                    degreesToRadians(rotationValue), 1.0f, 0.0f, 0.0f);
    rotationValue = y * kRotationMaxInDegrees;
    transform = CATransform3DRotate(transform,
                                    degreesToRadians(rotationValue), 0.0f, 1.0f, 0.0f);
    rotationValue = z * kRotationMaxInDegrees;
    transform = CATransform3DRotate(transform,
                                    degreesToRadians(rotationValue), 0.0f, 0.0f, 1.0f);
    [[_rotationView layer] setTransform:transform];
}
This code builds up a collective transform that you apply to the view's layer on each change to any slider.
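If rotation really isn't happening around the centre, the anchorPoint deserves a second look: it is specified in the unit coordinate space (0..1), so the commented-out attempt in the question, which assigns half the frame height (350 for a 700-point view), would put the anchor far outside the layer. Note also that changing anchorPoint moves the layer on screen unless position is compensated, because the frame origin is derived from both. A quick sketch of that per-axis relationship (frame_origin is a hypothetical helper, not a CA API):

```c
#include <assert.h>
#include <math.h>

/* Per axis, Core Animation derives the frame origin from the layer's
   position and its normalized anchorPoint: origin = position - anchor * size. */
double frame_origin(double position, double anchor, double size) {
    return position - anchor * size;
}
```

So moving the anchor from 0.5 to 0.0 on a 700-point layer shifts it by 350 points unless position is adjusted by the same amount, which may be why experiments with the anchor point appeared to do nothing useful.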
Doing consecutive transforms can often go awry. When you apply the angleX rotation, the angleY rotation is then executed in the new, rotated coordinate system. Basically, all the transforms are done relative to the "model", or object.
Most people expect transforms to be made relative to the camera system, that is, as we look at the view. It's likely that the results you want can be obtained by reversing the order of the transforms, like this:
// Apply rotations
CGFloat angleX = [[containerLayer valueForKey:@"__angleX"] floatValue];
CATransform3D holderTransform = CATransform3DMakeRotation(angleYRad, 0.0f, 1.0f, 0.0f);
holderTransform = CATransform3DRotate(holderTransform, angleX, 1.0f, 0.0f, 0.0f);
holder.transform = holderTransform;
[CATransaction commit];
It might help to think of it as a LIFO system (though it isn't one!) when you consider all moves in camera coordinates instead of the model's coordinates.