I am currently working on augmented reality, and for that purpose I'd like to use the gyroscope and Core Motion. I've studied the Apple pARk sample code and I understand most of the maths; I've spent time reading the documentation because at first glance it was not clear! Everything is fine until I try to make it work in landscape mode.
I won't explain all the theory here; it would be too long. But for those who have experienced it, my problem is this: we take the rotation matrix of the attitude and apply this rotation to our coordinates. That is fine so far, but it seems Core Motion doesn't adapt it to landscape mode. I've seen similar questions on this subject, but it looks like no one has a solution.
So I tried to make my own; here is my thinking:
Every time we rotate the device to landscape, a rotation of ±90° is applied (depending on landscape left or right). I decided to create a 4x4 rotation matrix to apply this rotation, and then multiply it by the cameraTransform matrix (the attitude's 3x3 CMRotationMatrix expanded to 4x4), which gives the matrix cameraTransformRotated:
- (void)createMatLandscape {
    switch (cameraOrientation) {
        case UIDeviceOrientationLandscapeLeft:
            // +90° rotation around the z axis
            landscapeRightTransform[0]  = cos(degreesToRadians(90));
            landscapeRightTransform[1]  = -sin(degreesToRadians(90));
            landscapeRightTransform[2]  = 0;
            landscapeRightTransform[3]  = 0;
            landscapeRightTransform[4]  = sin(degreesToRadians(90));
            landscapeRightTransform[5]  = cos(degreesToRadians(90));
            landscapeRightTransform[6]  = 0;
            landscapeRightTransform[7]  = 0;
            landscapeRightTransform[8]  = 0;
            landscapeRightTransform[9]  = 0;
            landscapeRightTransform[10] = 1;
            landscapeRightTransform[11] = 0;
            landscapeRightTransform[12] = 0;
            landscapeRightTransform[13] = 0;
            landscapeRightTransform[14] = 0;
            landscapeRightTransform[15] = 1;
            multiplyMatrixAndMatrix(cameraTransformRotated, cameraTransform, landscapeRightTransform);
            break;
        case UIDeviceOrientationLandscapeRight:
            // -90° rotation around the z axis
            landscapeLeftTransform[0]  = cos(degreesToRadians(-90));
            landscapeLeftTransform[1]  = -sin(degreesToRadians(-90));
            landscapeLeftTransform[2]  = 0;
            landscapeLeftTransform[3]  = 0;
            landscapeLeftTransform[4]  = sin(degreesToRadians(-90));
            landscapeLeftTransform[5]  = cos(degreesToRadians(-90));
            landscapeLeftTransform[6]  = 0;
            landscapeLeftTransform[7]  = 0;
            landscapeLeftTransform[8]  = 0;
            landscapeLeftTransform[9]  = 0;
            landscapeLeftTransform[10] = 1;
            landscapeLeftTransform[11] = 0;
            landscapeLeftTransform[12] = 0;
            landscapeLeftTransform[13] = 0;
            landscapeLeftTransform[14] = 0;
            landscapeLeftTransform[15] = 1;
            multiplyMatrixAndMatrix(cameraTransformRotated, cameraTransform, landscapeLeftTransform);
            break;
        default:
            // portrait: keep the camera transform unchanged
            for (int i = 0; i < 16; i++) {
                cameraTransformRotated[i] = cameraTransform[i];
            }
            break;
    }
}
Then, just before updating all the points, I do this:
multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransformRotated);
After that, the rest of the code remains unchanged; I just want the annotations to be displayed properly in landscape orientation. For now this is the only idea I have, and the rendering in landscape is not good: when I move the device to the right or the left, the annotations go down or up (just as they did before I added this code).
Has anyone come up with a solution? I'll keep searching, especially into CMRotationMatrix; it doesn't seem to be a typical rotation matrix, and I can't find any documentation stating precisely what the different elements of this matrix are.
I just managed to adapt this (Apple's pARk sample) to landscape (right) yesterday and would like to share the changes I made. It appears to work correctly, but please point out any mistakes. This only supports landscape right, but it can probably be adapted easily for left.
In ARView.m,
In -(void)initialize, switch the bounds height and width
createProjectionMatrix(projectionTransform, 60.0f*DEGREES_TO_RADIANS, self.bounds.size.height*1.0f / self.bounds.size.width, 0.25f, 1000.0f);
In -(void)startCameraPreview
[captureLayer setOrientation:AVCaptureVideoOrientationLandscapeRight];
In -(void)drawRect:
//switch x and y
float y = (v[0] / v[3] + 1.0f) * 0.5f;
float x = (v[1] / v[3] + 1.0f) * 0.5f;
poi.view.center = CGPointMake(self.bounds.size.width-x*self.bounds.size.width, self.bounds.size.height-y*self.bounds.size.height); //invert x
Related
So basically, I need performance. Currently at my job we use GDI+ to draw bitmaps. GDI+ Graphics has a method called DrawImage(Bitmap, Point[]). That array contains 3 points, and the rendered image comes out with a skew effect.
Here is an image of what a skew effect is:
Skew effect
At work, we need to render between 5000 and 6000 different images each frame, which takes ~80 ms.
Now I thought of using SharpDX, since it provides GPU acceleration. I use Direct2D since all I need is in 2 dimensions. However, the only way I found to reproduce the skew effect is to use the SharpDX AffineTransform2D effect and calculate a matrix to draw the initial bitmap with a skew (I will provide the code below). The rendered image is exactly the same as GDI+'s, and it is what I want. The only problem is that it takes 600-700 ms to render the 5000-6000 images.
Here is my SharpDX code. To initialize the device:
private void InitializeSharpDX()
{
    swapchaindesc = new SwapChainDescription()
    {
        BufferCount = 2,
        ModeDescription = new ModeDescription(this.Width, this.Height, new Rational(60, 1), Format.B8G8R8A8_UNorm),
        IsWindowed = true,
        OutputHandle = this.Handle,
        SampleDescription = new SampleDescription(1, 0),
        SwapEffect = SwapEffect.Discard,
        Usage = Usage.RenderTargetOutput,
        Flags = SwapChainFlags.None
    };

    SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.BgraSupport | DeviceCreationFlags.Debug, swapchaindesc, out device, out swapchain);

    SharpDX.DXGI.Device dxgiDevice = device.QueryInterface<SharpDX.DXGI.Device>();
    surface = swapchain.GetBackBuffer<Surface>(0);

    factory = new SharpDX.Direct2D1.Factory1(FactoryType.SingleThreaded, DebugLevel.Information);
    d2device = new SharpDX.Direct2D1.Device(factory, dxgiDevice);
    d2deviceContext = new SharpDX.Direct2D1.DeviceContext(d2device, SharpDX.Direct2D1.DeviceContextOptions.EnableMultithreadedOptimizations);

    bmpproperties = new BitmapProperties(new SharpDX.Direct2D1.PixelFormat(SharpDX.DXGI.Format.B8G8R8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Premultiplied), 96, 96);

    d2deviceContext.AntialiasMode = AntialiasMode.Aliased;
    bmp = new SharpDX.Direct2D1.Bitmap(d2deviceContext, surface, bmpproperties);
    d2deviceContext.Target = bmp;
}
And here is the code I use to recalculate every image's position each frame (each time I zoom in or out with the mouse, I request a redraw). You can see two loops over 5945 images where I ask to draw the image. With no effects it takes 60 ms, and with effects it takes up to 700 ms, as I mentioned before:
private void DrawSkew()
{
    d2deviceContext.BeginDraw();
    d2deviceContext.Clear(SharpDX.Color.Blue);

    // draw skew effect on 5945 images using SharpDX (370ms)
    for (int i = 0; i < 5945; i++)
    {
        AffineTransform2D effect = new AffineTransform2D(d2deviceContext);
        PointF[] points = new PointF[3];
        points[0] = new PointF(50, 50);
        points[1] = new PointF(400, 40);
        points[2] = new PointF(40, 400);
        effect.SetInput(0, actualBmp, true);

        float xAngle = (float)Math.Atan((points[1].Y - points[0].Y) / (points[1].X - points[0].X));
        float yAngle = (float)Math.Atan((points[2].X - points[0].X) / (points[2].Y - points[0].Y));

        Matrix3x2 Matrix = Matrix3x2.Identity;
        Matrix3x2.Skew(xAngle, yAngle, out Matrix);
        Matrix.M11 = Matrix.M11 * (((points[1].X - points[0].X) + (points[2].X - points[0].X)) / actualBmp.Size.Width);
        Matrix.M22 = Matrix.M22 * (((points[1].Y - points[0].Y) + (points[2].Y - points[0].Y)) / actualBmp.Size.Height);
        effect.TransformMatrix = Matrix;

        d2deviceContext.DrawImage(effect, new SharpDX.Vector2(points[0].X, points[0].Y));
        effect.Dispose();
    }

    // draw with no effects, only the actual bitmap, 5945 times using SharpDX (60ms)
    for (int i = 0; i < 5945; i++)
    {
        d2deviceContext.DrawBitmap(actualBmp, 1.0f, BitmapInterpolationMode.NearestNeighbor);
    }

    d2deviceContext.EndDraw();
    swapchain.Present(1, PresentFlags.None);
}
After benchmarking a lot, I realized the line that makes it really slow is:
d2deviceContext.DrawImage(effect, new SharpDX.Vector2(points[0].X, points[0].Y));
My guess is that my code or my setup does not use SharpDX's GPU acceleration as it should, and this is why the code is really slow. I would expect at least better performance from SharpDX than from GDI+ for this kind of work.
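One idea I have not fully benchmarked yet (a sketch of my own, not a verified fix): create the AffineTransform2D effect once and reuse it, updating only its transform per draw, instead of constructing and disposing 5945 effect objects every frame. Here, skewEffect is a new field and ComputeSkewMatrix is a hypothetical helper wrapping the same xAngle/yAngle/Skew/M11/M22 math as in DrawSkew above:

// Created once, e.g. at the end of InitializeSharpDX:
skewEffect = new AffineTransform2D(d2deviceContext);
skewEffect.SetInput(0, actualBmp, true);

// Per frame: reuse the same effect, only the transform changes.
d2deviceContext.BeginDraw();
d2deviceContext.Clear(SharpDX.Color.Blue);
for (int i = 0; i < 5945; i++)
{
    // ComputeSkewMatrix: hypothetical helper containing the matrix
    // computation from DrawSkew above, with points as in that loop.
    skewEffect.TransformMatrix = ComputeSkewMatrix(points, actualBmp);
    d2deviceContext.DrawImage(skewEffect, new SharpDX.Vector2(points[0].X, points[0].Y));
}
d2deviceContext.EndDraw();
swapchain.Present(1, PresentFlags.None);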
I'm currently working on a program for character recognition using C# and AForge.NET, and now I'm struggling with the processing of blobs.
This is how I created the blobs:
BlobCounter bcb = new BlobCounter();
bcb.FilterBlobs = true;
bcb.MinHeight = 30;
bcb.MinWidth = 5;
bcb.ObjectsOrder = ObjectsOrder.XY;
bcb.ProcessImage(image);
I also marked them with rectangles:
Rectangle[] rects = bcb.GetObjectsRectangles();
Pen pen = new Pen(Color.Red, 1);
Graphics g = Graphics.FromImage(image);
foreach (Rectangle rect in rects)
{
    g.DrawRectangle(pen, rect);
}
After execution my reference image looks like this:
BlobImage
As you can see, almost all characters are recognized. Unfortunately, some characters include blobs inside a blob, e.g. "g", "o" or "d".
I would like to eliminate the blobs which are inside another blob.
I tried to adjust the drawing of the rectangles to achieve my objective:
foreach (Rectangle rect in rects)
{
    for (int i = 0; i < (rects.Length - 1); i++)
    {
        if (rects[i].Contains(rects[i + 1]))
            rects[i] = Rectangle.Union(rects[i], rects[i + 1]);
    }
    g.DrawRectangle(pen, rect);
}
...but it wasn't successful at all.
Maybe some of you can help me?
You can try to detect rectangles within rectangles by checking their corner coordinates. I have some MATLAB code for this which I wrote for a similar kind of problem. Here is a snippet of the code:
function varargout = isBoxMerg(ReferenceBox, TestBox, isNewBox)
X = ReferenceBox; Y = TestBox;
X1 = X(1); Y1 = X(2); W1 = X(3); H1 = X(4);
X2 = Y(1); Y2 = Y(2); W2 = Y(3); H2 = Y(4);
% the boxes overlap if each box starts before the other one ends
if ((X1+W1) >= X2 && (X2+W2) >= X1 && (Y1+H1) >= Y2 && (Y2+H2) >= Y1)
    Intersection = true;
else
    Intersection = false;
end
varargout{1} = Intersection;
Here X and Y are the upper-left corner coordinates of the bounding rectangles, and W and H are the width and height respectively. If the Intersection variable becomes true, the boxes intersect. You can use this code as a basis for further customization.
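For the question's C# code, a minimal sketch of the same idea might look like this (the containment filter is my own adaptation, not an AForge.NET API; rects, g and pen are the question's own variables):

// Keep only rectangles that are not contained in any other rectangle,
// then draw those. Rectangle.Contains(Rectangle) is a standard
// System.Drawing member; List<T> needs using System.Collections.Generic.
List<Rectangle> outer = new List<Rectangle>();
for (int i = 0; i < rects.Length; i++)
{
    bool contained = false;
    for (int j = 0; j < rects.Length; j++)
    {
        if (i != j && rects[j].Contains(rects[i]))
        {
            contained = true;
            break;
        }
    }
    if (!contained)
        outer.Add(rects[i]);
}
foreach (Rectangle rect in outer)
{
    g.DrawRectangle(pen, rect);
}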
Thank You
I create stage walls and a box inside them in my mobile app using Starling + AS3. Now, when I test the app, the box falls but it does not line up with the walls, as if there were an offset:
https://www.dropbox.com/s/hd4ehnfthh0ucfm/box.png
Here is how I create the boxes (both the walls and the box). It seems like there is a hidden offset somewhere. What do you think?
public function createBox(x:Number, y:Number, width:Number, height:Number, rotation:Number = 0, bodyType:uint = 0):void {
    // vars used to create bodies
    var body:b2Body;
    var boxShape:b2PolygonShape;
    var circleShape:b2CircleShape;
    var fixtureDef:b2FixtureDef = new b2FixtureDef();
    fixtureDef.shape = boxShape;
    fixtureDef.friction = 0.3;
    // static bodies require zero density
    fixtureDef.density = 0;

    var quad:Quad;

    bodyDef = new b2BodyDef();
    bodyDef.type = bodyType;
    bodyDef.position.x = x / WORLD_SCALE;
    bodyDef.position.y = y / WORLD_SCALE;

    // box
    boxShape = new b2PolygonShape();
    boxShape.SetAsBox(width / WORLD_SCALE, height / WORLD_SCALE);
    fixtureDef.shape = boxShape;
    fixtureDef.density = 0;
    fixtureDef.friction = 0.5;
    fixtureDef.restitution = 0.2;

    // create the quad
    quad = new Quad(width, height, Math.random() * 0xFFFFFF);
    quad.pivotX = 0;
    quad.pivotY = 0;

    // this is the key line: we pass the starling.display.Quad as userData
    bodyDef.userData = quad;

    body = m_world.CreateBody(bodyDef);
    body.CreateFixture(fixtureDef);
    body.SetAngle(rotation * (Math.PI / 180));

    _clipPhysique.addChild(bodyDef.userData);
}
The SetAsBox method takes the half-width and half-height as its parameters. I'm guessing your graphics don't match your Box2D bodies, so you will either need to make your graphics twice as big or multiply your SetAsBox params by 0.5. Also, the body's pivot will be at its center, so offset your movieclip accordingly depending on its pivot position.
Note that Box2D has a debug renderer which can outline your bodies so you can see what's going on.
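To make the conversion concrete, here is a minimal sketch (C# syntax for illustration; the question's code is AS3, but the arithmetic is identical; the helper name is my own):

// Hypothetical helper: converts a full pixel size into the half-extent
// that Box2D's SetAsBox expects, in world units.
static float ToHalfExtent(float fullSizePixels, float worldScale)
{
    return fullSizePixels * 0.5f / worldScale;
}

// Usage, mirroring the question's variables:
//   boxShape.SetAsBox(ToHalfExtent(width, WORLD_SCALE), ToHalfExtent(height, WORLD_SCALE));
// and since the body's origin is the box center, center the quad's pivot:
//   quad.pivotX = width * 0.5;
//   quad.pivotY = height * 0.5;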
I'm writing some Windows Phone 7 apps, and I intend to access the photos on the phone. I take a photo with the phone, and its size is 1944x2592 (W x H). Then I use:
MediaLibrary mediaLibrary = new MediaLibrary();
for (int x = 0; x < mediaLibrary.Pictures.Count; ++x)
{
    Picture pic = mediaLibrary.Pictures[x];
    int w = pic.Width;
    int h = pic.Height;
    ...
However, I found that w is 2592 and h is 1944: the values of Width and Height are reversed! Can anyone tell me what's going on? I'm looking forward to your reply. Thank you.
The camera detects the phone's orientation and stores it in the photo's metadata. The stored pixel width and height are therefore always the same, and programs such as Zune or the Picture Viewer read the orientation from the metadata when they display the photo.
Here is a resource explaining this and providing sample code in C#. The particularly important part is right at the bottom. To use it, you will need this library (which also has a useful guide):
void OnCameraCaptureCompleted(object sender, PhotoResult e)
{
    // figure out the orientation from EXIF data
    e.ChosenPhoto.Position = 0;
    JpegInfo info = ExifReader.ReadJpeg(e.ChosenPhoto, e.OriginalFileName);

    _width = info.Width;
    _height = info.Height;
    _orientation = info.Orientation;

    PostedUri.Text = info.Orientation.ToString();

    switch (info.Orientation)
    {
        case ExifOrientation.TopLeft:
        case ExifOrientation.Undefined:
            _angle = 0;
            break;
        case ExifOrientation.TopRight:
            _angle = 90;
            break;
        case ExifOrientation.BottomRight:
            _angle = 180;
            break;
        case ExifOrientation.BottomLeft:
            _angle = 270;
            break;
    }
    .....
}
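To then display the photo upright, one option (my own sketch, not part of the linked sample) is to rotate the on-screen image by _angle; PhotoImage here is a hypothetical Image element:

// Compensate for the stored EXIF orientation when displaying the photo.
PhotoImage.RenderTransformOrigin = new System.Windows.Point(0.5, 0.5);
PhotoImage.RenderTransform = new System.Windows.Media.RotateTransform { Angle = _angle };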
There is a class associated with the program PhysicsEditor called GB2ShapeCache that loads shapes made in the program. I noticed that it is not currently possible to change the scale of the shapes on the fly, so I would like to be able to scale the fixtures for the shapes that I made in PhysicsEditor. The scale of the CCSprite in my app can be arbitrary, so currently, in the addShapesWithFile method, I do this for polygons:
vertices[vindex].x = (offset.x * sprite.scaleX) / ptmRatio_;
vertices[vindex].y = (offset.y * sprite.scaleY) / ptmRatio_;
and this for circles:
circleShape->m_radius = ([[circleData objectForKey:@"radius"] floatValue] / ptmRatio_) * sprite.scale;
I also changed the method signature so that I can pass in my sprite and read its scale:

-(void) addShapesWithFile:(NSString*)plist forSprite:(CCSprite*)sprite
HOWEVER, I find this to be inefficient, because I should not have to reload ALL the shapes in my plist when they have already been added.
So is there any way to do what I am doing now, but in the addFixturesToBody method? That way I would not re-create the already-added plist shapes, and I would only scale the fixtures when they are ready to be added to my body.
If anyone needs to see more code or needs more info, feel free to ask. I know this issue must be simple!
Thanks!
I would recommend implementing it in the addFixturesToBody method.
(see https://github.com/AndreasLoew/GBox2D/blob/master/GBox2D/GB2ShapeCache.mm)
Try the method below; it should scale the shapes according to the sprites they are for. Just pass in your CCSprite and the method will handle the rest.
- (void)addFixturesToBody:(b2Body*)body forShapeName:(NSString*)shape forSprite:(CCSprite*)sprite {
    BodyDef *so = [shapeObjects_ objectForKey:shape];
    assert(so);

    FixtureDef *fix = so->fixtures;
    if ((sprite.scaleX == 1.0f) && (sprite.scaleY == 1.0f)) {
        // simple case - so do not waste any energy on this
        while (fix) {
            body->CreateFixture(&fix->fixture);
            fix = fix->next;
        }
    } else {
        b2Vec2 vertices[b2_maxPolygonVertices];
        while (fix) {
            // make a local copy of the fixture def
            b2FixtureDef fix2 = fix->fixture;

            // get the shape
            const b2Shape *s = fix2.shape;

            // declared at this scope so the scaled copies are still alive
            // when CreateFixture is called below (the originals were declared
            // inside the if blocks, leaving fix2.shape dangling)
            b2PolygonShape p2;
            b2CircleShape c2;

            // clone & scale polygon
            const b2PolygonShape *p = dynamic_cast<const b2PolygonShape*>(s);
            if (p) {
                for (int i = 0; i < p->m_vertexCount; i++) {
                    vertices[i].x = p->m_vertices[i].x * sprite.scaleX;
                    vertices[i].y = p->m_vertices[i].y * sprite.scaleY;
                }
                p2.Set(vertices, p->m_vertexCount);
                fix2.shape = &p2;
            }

            // clone & scale circle
            const b2CircleShape *c = dynamic_cast<const b2CircleShape*>(s);
            if (c) {
                c2.m_radius = c->m_radius * sprite.scale;
                c2.m_p.x = c->m_p.x * sprite.scaleX;
                c2.m_p.y = c->m_p.y * sprite.scaleY;
                fix2.shape = &c2;
            }

            // add to body (CreateFixture copies the shape, so the locals are safe)
            body->CreateFixture(&fix2);
            fix = fix->next;
        }
    }
}