Width and Height are reversed when using MediaLibrary.Pictures to get a picture - windows-phone-7.1

I am writing some Windows Phone 7 apps and want to access the photos on the phone.
I took a photo with the phone; its size is 1944x2592 (W x H). Then I use:
MediaLibrary mediaLibrary = new MediaLibrary();
for (int x = 0; x < mediaLibrary.Pictures.Count; ++x)
{
    Picture pic = mediaLibrary.Pictures[x];
    int w = pic.Width;
    int h = pic.Height;
    ...
However, I found that w is 2592 and h is 1944. The values of Width and Height are reversed!
Can anyone tell me what's going on? What's the problem? I am looking forward to your reply. Thank you.

The camera detects the phone's orientation and stores it as metadata. So the stored height and width will always be the sensor's native dimensions regardless of how the phone was held, and the orientation used when the picture is displayed in Zune, the Picture Viewer, or most other programs is read out of the metadata.
Here is a resource explaining it and providing sample code in C#. The particularly important part is right at the bottom. To use this, you will need this library (also a useful guide there):
void OnCameraCaptureCompleted(object sender, PhotoResult e)
{
    // figure out the orientation from EXIF data
    e.ChosenPhoto.Position = 0;
    JpegInfo info = ExifReader.ReadJpeg(e.ChosenPhoto, e.OriginalFileName);
    _width = info.Width;
    _height = info.Height;
    _orientation = info.Orientation;
    PostedUri.Text = info.Orientation.ToString();
    switch (info.Orientation)
    {
        case ExifOrientation.TopLeft:
        case ExifOrientation.Undefined:
            _angle = 0;
            break;
        case ExifOrientation.TopRight:
            _angle = 90;
            break;
        case ExifOrientation.BottomRight:
            _angle = 180;
            break;
        case ExifOrientation.BottomLeft:
            _angle = 270;
            break;
    }
    .....
}

Related

iOS 11.3 ARKit Autofocus and higher resolution

As mentioned here, Apple now allows higher resolution and autofocus in ARKit-based apps. Has anybody already tried implementing those features in their own apps?
Where does Apple usually share more technical details about such updates?
Regards!
You haven't been able to set the ARCamera image resolution, which is why you may not find anything relating to adjusting the camera image resolution (see: Changing ARCamera Resolution).
If you want to check the ARCamera image resolution, you can access it via the currentFrame:
let currentFrame = sceneView.session.currentFrame
print(currentFrame?.camera.imageResolution)
To date it has been set to 1280.0 x 720.0.
If you want more information about the focal length, which I believe autofocus may now be able to adjust automatically, you can just check the camera property of currentFrame:
print(currentFrame?.camera)
OK, autofocus can be enabled with the isAutoFocusEnabled property:
// API declaration:
var isAutoFocusEnabled: Bool { get set }

// Usage:
let configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = true // or false
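Note that the configuration still has to be run on the session for the setting to take effect; a minimal sketch, assuming the same sceneView as in the snippets above (the availability check is mine):
import ARKit

let configuration = ARWorldTrackingConfiguration()
if #available(iOS 11.3, *) {
    // isAutoFocusEnabled exists from iOS 11.3 onwards.
    configuration.isAutoFocusEnabled = true
}
sceneView.session.run(configuration)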
Any clues about higher resolution?
I am able to select the desired resolution for the AR camera using the following code:
ARWorldTrackingConfiguration* configuration = [ARWorldTrackingConfiguration new];
NSArray<ARVideoFormat*>* supportedVideoFormats = [ARWorldTrackingConfiguration supportedVideoFormats];
int bestFormatIndex = 0;
for (int i = 0; i < [supportedVideoFormats count]; i++) {
    float width = supportedVideoFormats[i].imageResolution.width;
    float height = supportedVideoFormats[i].imageResolution.height;
    NSLog(@"AR Video Format %f x %f", width, height);
    if (width * 9 == height * 16) { // <-- Use your own condition for selecting video format
        bestFormatIndex = i;
        break;
    }
}
[configuration setVideoFormat:supportedVideoFormats[bestFormatIndex]];

// Run the view's session
[_mSceneView.session runWithConfiguration:configuration];
For my requirement I wanted the biggest 16:9 ratio. On the iPhone 8 Plus, the following image resolutions are listed:
1920 x 1440
1920 x 1080
1280 x 720
Notice that they are sorted in the supportedVideoFormats array, with the biggest resolution at index 0, and index 0 is the default selected video format.
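For reference, here is a minimal Swift sketch of the same selection, assuming an ARSCNView called sceneView (the 16:9 condition is just an example, as in the Objective-C version above):
import ARKit

let configuration = ARWorldTrackingConfiguration()
if #available(iOS 11.3, *) {
    let formats = ARWorldTrackingConfiguration.supportedVideoFormats
    // Formats are sorted with the largest resolution first, so the first match
    // with the wanted aspect ratio is the biggest one.
    if let best = formats.first(where: { $0.imageResolution.width * 9 == $0.imageResolution.height * 16 }) {
        configuration.videoFormat = best
    }
}
sceneView.session.run(configuration)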

(iOS) Accelerometer Graph (convert g-force to +/- 128) granularity

I am using this Accelerometer graph from Apple and trying to convert their G-force code to calculate +/- 128.
The following image shows that the x, y, z values in the labels do not match the output on the graph. (Note that the addX:y:z values are what is shown in the labels above the graph.)
ViewController
The x, y, z values are received from a Bluetooth peripheral, then converted using:
// Updates LABELS
- (void)didReceiveRawAcceleromaterDataWithX:(NSInteger)x Y:(NSInteger)y Z:(NSInteger)z
{
    dispatch_async(dispatch_get_main_queue(), ^{
        _labelAccel.text = [NSString stringWithFormat:@"x:%li y:%li z:%li", (long)x, (long)y, (long)z];
    });
}

// Updates GRAPHS
- (void)didReceiveAcceleromaterDataWithX:(NSInteger)x Y:(NSInteger)y Z:(NSInteger)z
{
    dispatch_async(dispatch_get_main_queue(), ^{
        float xx = ((float)x) / 8192;
        float yy = ((float)y) / 8192;
        float zz = ((float)z) / 8192;
        [_xGraph addX:xx y:0 z:0];
        [_yGraph addX:0 y:yy z:0];
        [_zGraph addX:0 y:0 z:zz];
    });
}
GraphView
- (BOOL)addX:(UIAccelerationValue)x y:(UIAccelerationValue)y z:(UIAccelerationValue)z
{
    // If this segment is not full, then we add a new acceleration value to the history.
    if (index > 0)
    {
        // First decrement, both to get to a zero-based index and to flag one fewer position left
        --index;
        xhistory[index] = x;
        yhistory[index] = y;
        zhistory[index] = z;
        // And inform Core Animation to redraw the layer.
        [layer setNeedsDisplay];
    }
    // And return if we are now full or not (really just avoids needing to call isFull after adding a value).
    return index == 0;
}
- (void)drawLayer:(CALayer*)l inContext:(CGContextRef)context
{
    // Fill in the background
    CGContextSetFillColorWithColor(context, kUIColorLightGray(1.f).CGColor);
    CGContextFillRect(context, layer.bounds);
    // Draw the grid lines
    DrawGridlines(context, 0.0, 32.0);
    // Draw the graph
    CGPoint lines[64];
    int i;
    float _granularity = 16.f; // 16
    NSInteger _granualCount = 32; // 32
    // X
    for (i = 0; i < _granualCount; ++i)
    {
        lines[i*2].x = i;
        lines[i*2+1].x = i + 1;
        lines[i*2].y = xhistory[i] * _granularity;
        lines[i*2+1].y = xhistory[i+1] * _granularity;
    }
    CGContextSetStrokeColorWithColor(context, _xColor.CGColor);
    CGContextStrokeLineSegments(context, lines, 64);
    // Y
    for (i = 0; i < _granualCount; ++i)
    {
        lines[i*2].y = yhistory[i] * _granularity;
        lines[i*2+1].y = yhistory[i+1] * _granularity;
    }
    CGContextSetStrokeColorWithColor(context, _yColor.CGColor);
    CGContextStrokeLineSegments(context, lines, 64);
    // Z
    for (i = 0; i < _granualCount; ++i)
    {
        lines[i*2].y = zhistory[i] * _granularity;
        lines[i*2+1].y = zhistory[i+1] * _granularity;
    }
    CGContextSetStrokeColorWithColor(context, _zColor.CGColor);
    CGContextStrokeLineSegments(context, lines, 64);
}
How can I adjust the above code so that the graph shows the correct accelerometer values with precision?
I post this as an answer, not a comment, because I don't have enough reputation, but what I'll write might be enough to send you in the right direction, so it may even count as an answer...
Your question still doesn't include what is really important. I assume the calculation of xx/yy/zz is no problem, although I have no idea what the 8192 is supposed to mean.
I guess the problem is in the part where you map your values to pixel coordinates...
The lines[] array contains your values in a range of 1/8192th of the values in the label, so your x value of -2 should be at a pixel position of -0.0000something, i.e. slightly (far less than one pixel) above the view... Because you see the line a lot further down, there must be some translation in place (not shown in your code).
The second part that is important but not shown is DrawGridlines. Probably there is a different approach in there for mapping the values to pixel coordinates...
Use the debugger to check what pixel coordinates you get when you draw your +127 line and what you get if you insert the value of +127 into your history array.
And some ideas for improvements from reading your code:
1.) Put the graph in its own class that draws one graph (and has only one history). Somehow you seem to have that partially already (otherwise I cannot figure out your _xGraph/_yGraph/_zGraph), but on the other hand you draw all 3 values in one drawLayer? Currently you seem to have 3*3 history buffers, of which 3*2 are filled with zeros...
2.) Use one place where you do the calculation of Y, and use it both for drawing the grid and for drawing the lines (see the sketch below)...
3.) Use CGContextMoveToPoint() + CGContextAddLineToPoint() instead of copying into lines[] with these ugly 2*i+1 indices...
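To illustrate suggestion 2.), here is a minimal Swift sketch of a single mapping function shared by the gridlines and the data (the original code is Objective-C; the ±128 range, viewHeight, and the function name are assumptions for illustration):
import CoreGraphics

// One shared mapping from a raw accelerometer value to a y pixel coordinate.
// Both the gridlines and the data lines should go through this function so
// they can never disagree about where a given value is drawn.
func yPixel(for value: CGFloat,
            valueRange: ClosedRange<CGFloat> = -128...128,
            viewHeight: CGFloat) -> CGFloat {
    let span = valueRange.upperBound - valueRange.lowerBound
    let normalized = (value - valueRange.lowerBound) / span   // 0...1
    return viewHeight * (1 - normalized)                      // flip so +128 is at the top
}

// Example: the +127 gridline and a +127 sample land on the same pixel row
// of a 256 pt tall view, by construction.
let gridY = yPixel(for: 127, viewHeight: 256)
let sampleY = yPixel(for: 127, viewHeight: 256)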

Template Matching on various sizes

Right now I am working on an OCR algorithm with Template Matching, using the opencv library. I am comparing pixel by pixel, and till now I have obtained good results. The problem comes when the area I want to match is of different size.
Ex: Template size = 70x100 while ROI = 140x200.
Is there any function that I can use in order to adapt the required size and end up with the same number of rows and columns?
Thanks
Robert Grech
Usually one makes an image scale pyramid and then scans with the 70x100 window across all scales, e.g. as in OpenCV's HOGDescriptor:
double scale = 1.;
double scale0 = 1.05;
int maxLevels = 64;
int nLevels;
Size templateSize(70, 100);
cv::Mat testImage = cv::imread("test1.jpg");
vector<double> levelScale;
for (nLevels = 0; nLevels < maxLevels; nLevels++)
{
    levelScale.push_back(scale);
    if (cvRound(testImage.cols/scale) < templateSize.width ||
        cvRound(testImage.rows/scale) < templateSize.height ||
        scale0 <= 1)
        break;
    scale *= scale0;
}
nLevels = std::max(nLevels, 1);
levelScale.resize(nLevels);
int level;
for (level = 0; level < nLevels; level++)
{
    cv::Mat testAtScale;
    Size sz(cvRound(testImage.cols/levelScale[level]),
            cvRound(testImage.rows/levelScale[level]));
    resize(testImage, testAtScale, sz);
    //result = match(template, testAtScale);
    //cv::imshow("scale", testAtScale);
    //cv::waitKey();
}
You would then need to post-process your results back to the original scale. This is simple with a box, but if you have a heat map / response map / probability map, then resizing it back up may be somewhat hacky.
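For the box case, mapping back is just a multiplication by that level's scale; a minimal sketch (written in Swift/CoreGraphics only to keep the examples in this collection in one language; the function and parameter names are illustrative):
import CoreGraphics

// A match found at a pyramid level lives in a downscaled image, so multiplying
// its rectangle by that level's scale maps it back to original-image coordinates.
func rescaleToOriginal(_ box: CGRect, levelScale: CGFloat) -> CGRect {
    return CGRect(x: box.origin.x * levelScale,
                  y: box.origin.y * levelScale,
                  width: box.width * levelScale,
                  height: box.height * levelScale)
}

// Example: a 70x100 hit at (30, 40) on a level scaled down by 2.0
// corresponds to a 140x200 region at (60, 80) in the full-size image.
let original = rescaleToOriginal(CGRect(x: 30, y: 40, width: 70, height: 100), levelScale: 2.0)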

How to detect if the iPhone is still?

I have to make an app in which the user can take a photo only when the iPhone is still. Can you please tell me how to proceed with that? Any help will be appreciated.
Below is the code that I have tried; please suggest improvements, as this code is giving jerky output:
_previousMotionValue = 0.0f;
memset(xQueue, 0, sizeof(xQueue));
memset(yQueue, 0, sizeof(yQueue));
queueIndex = 0;

[_motionManager startAccelerometerUpdatesToQueue:_motionManagerUpdatesQueue withHandler:^(CMAccelerometerData *accelerometerData, NSError *error) {
    if ([_motionManagerUpdatesQueue operationCount] > 1) {
        return;
    }
    xQueue[queueIndex] = -accelerometerData.acceleration.x;
    yQueue[queueIndex] = accelerometerData.acceleration.y;
    queueIndex++;
    if (queueIndex >= QueueCapacity) {
        queueIndex = 0;
    }
    float xSum = 0;
    float ySum = 0;
    int i = 0;
    while (i < QueueCapacity)
    {
        xSum += xQueue[i];
        ySum += yQueue[i];
        i++;
    }
    ySum /= QueueCapacity;
    xSum /= QueueCapacity;
    double motionValue = sqrt(xSum * xSum + ySum * ySum);
    CGFloat difference = 50000.0 * ABS(motionValue - _previousMotionValue);
    if (difference < 100)
    {
        // fire event for capture
    }
    [view setVibrationLevel:difference];
    _previousMotionValue = motionValue;
}];
Based on the vibration level, I am setting different images (green, yellow, red). I have chosen a threshold of 100.
To answer “…user can take photo only when iPhone is stabilized…?”:
You can use CoreMotion.framework and its CMMotionManager to obtain info about device movement. (I guess you are interested in accelerometer data.) These data will come at a high rate (you can choose the frequency; the default is 1/60 s). Then you store (let's say) the 10 latest values and compute some statistics about the average and the differences. By choosing an optimal threshold you can tell when the device is in a stable position.
But you mentioned image stabilization, which is not the same as taking photos in a stable position. To stabilize the image, I guess you will have to adjust the captured image by some small offset calculated from the device motion.
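A minimal Swift sketch of the first approach (rolling buffer plus threshold); the buffer size, update interval, and the 0.02 g threshold are assumptions to tune, not values from the answer:
import CoreMotion

let motionManager = CMMotionManager()
var recent: [Double] = []          // magnitudes of the latest samples
let bufferSize = 10
let stillnessThreshold = 0.02      // maximum allowed spread, in g

motionManager.accelerometerUpdateInterval = 1.0 / 60.0
motionManager.startAccelerometerUpdates(to: .main) { data, _ in
    guard let a = data?.acceleration else { return }

    // Keep a rolling buffer of the acceleration magnitude.
    let magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
    recent.append(magnitude)
    if recent.count > bufferSize { recent.removeFirst() }
    guard recent.count == bufferSize else { return }

    // If the spread of the recent magnitudes is small, the device is "still".
    let spread = (recent.max() ?? 0) - (recent.min() ?? 0)
    if spread < stillnessThreshold {
        // Device is still enough: enable or trigger the capture here.
    }
}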

Using Core Motion in landscape mode

I am currently working on augmented reality, and for that purpose I'd like to use the gyroscope and Core Motion. I've studied the Apple pARk sample code and I understand most of the maths; I've spent time reading the documentation because at first glance it was not clear! Everything is fine until I try to make it work in landscape mode.
I won't explain all the theory here; it would be too long. But for those who have experienced it, my problem is this: we take the rotation matrix of the attitude and apply this rotation to our coordinates. OK, it is fine up to here, but it seems Core Motion doesn't adapt it to landscape mode. I saw similar questions on this subject, but it looks like no one has a solution.
So I tried to make my own. Here is what I think:
Every time we rotate the device to landscape, a rotation of ±90° is made (depending on landscape left or right). I decided to create a 4x4 rotation matrix to apply this rotation, and then multiply it with the cameraTransform matrix (the attitude's 3x3 CMRotationMatrix adapted to 4x4). We then obtain the matrix cameraTransformRotated:
- (void)createMatLandscape {
    switch (cameraOrientation) {
        case UIDeviceOrientationLandscapeLeft:
            landscapeRightTransform[0] = cos(degreesToRadians(90));
            landscapeRightTransform[1] = -sin(degreesToRadians(90));
            landscapeRightTransform[2] = 0;
            landscapeRightTransform[3] = 0;
            landscapeRightTransform[4] = sin(degreesToRadians(90));
            landscapeRightTransform[5] = cos(degreesToRadians(90));
            landscapeRightTransform[6] = 0;
            landscapeRightTransform[7] = 0;
            landscapeRightTransform[8] = 0;
            landscapeRightTransform[9] = 0;
            landscapeRightTransform[10] = 1;
            landscapeRightTransform[11] = 0;
            landscapeRightTransform[12] = 0;
            landscapeRightTransform[13] = 0;
            landscapeRightTransform[14] = 0;
            landscapeRightTransform[15] = 1;
            multiplyMatrixAndMatrix(cameraTransformRotated, cameraTransform, landscapeRightTransform);
            break;
        case UIDeviceOrientationLandscapeRight:
            landscapeLeftTransform[0] = cos(degreesToRadians(-90));
            landscapeLeftTransform[1] = -sin(degreesToRadians(-90));
            landscapeLeftTransform[2] = 0;
            landscapeLeftTransform[3] = 0;
            landscapeLeftTransform[4] = sin(degreesToRadians(-90));
            landscapeLeftTransform[5] = cos(degreesToRadians(-90));
            landscapeLeftTransform[6] = 0;
            landscapeLeftTransform[7] = 0;
            landscapeLeftTransform[8] = 0;
            landscapeLeftTransform[9] = 0;
            landscapeLeftTransform[10] = 1;
            landscapeLeftTransform[11] = 0;
            landscapeLeftTransform[12] = 0;
            landscapeLeftTransform[13] = 0;
            landscapeLeftTransform[14] = 0;
            landscapeLeftTransform[15] = 1;
            multiplyMatrixAndMatrix(cameraTransformRotated, cameraTransform, landscapeLeftTransform);
            break;
        default:
            cameraTransformRotated[0] = cameraTransform[0];
            cameraTransformRotated[1] = cameraTransform[1];
            cameraTransformRotated[2] = cameraTransform[2];
            cameraTransformRotated[3] = cameraTransform[3];
            cameraTransformRotated[4] = cameraTransform[4];
            cameraTransformRotated[5] = cameraTransform[5];
            cameraTransformRotated[6] = cameraTransform[6];
            cameraTransformRotated[7] = cameraTransform[7];
            cameraTransformRotated[8] = cameraTransform[8];
            cameraTransformRotated[9] = cameraTransform[9];
            cameraTransformRotated[10] = cameraTransform[10];
            cameraTransformRotated[11] = cameraTransform[11];
            cameraTransformRotated[12] = cameraTransform[12];
            cameraTransformRotated[13] = cameraTransform[13];
            cameraTransformRotated[14] = cameraTransform[14];
            cameraTransformRotated[15] = cameraTransform[15];
            break;
    }
}
Then just before we update all the points I do this:
multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransformRotated);
After that the rest of the code remains unchanged; I just want the annotations to be displayed properly in landscape orientation. For now this is the only idea I have, and the rendering in landscape is not good: when I move the device to the right or the left, the annotations go down or up (just as they did before I added this code).
Has anyone come up with a solution? I'll keep on searching, especially into the CMRotationMatrix; it doesn't seem to be a typical rotation matrix, and I can't find any documentation saying precisely what the different elements of this matrix are.
I just managed to adapt this (Apple's pARk sample) to landscape (right) yesterday and would like to share the changes made. It appears to work correctly, but please call out any mistakes. This only supports landscape right but can probably be adapted easily for left.
In ARView.m,
In -(void)initialize, switch the bounds height and width
createProjectionMatrix(projectionTransform, 60.0f*DEGREES_TO_RADIANS, self.bounds.size.height*1.0f / self.bounds.size.width, 0.25f, 1000.0f);
In -(void)startCameraPreview
[captureLayer setOrientation:AVCaptureVideoOrientationLandscapeRight];
In -(void)drawRect:
//switch x and y
float y = (v[0] / v[3] + 1.0f) * 0.5f;
float x = (v[1] / v[3] + 1.0f) * 0.5f;
poi.view.center = CGPointMake(self.bounds.size.width-x*self.bounds.size.width, self.bounds.size.height-y*self.bounds.size.height); //invert x
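For anyone following the question's original idea (post-multiplying the attitude matrix by a ±90° Z rotation), here is a minimal Swift/simd sketch of that construction; it is my own illustration, not code from the pARk sample, and the function and parameter names are illustrative:
import CoreMotion
import simd

// Build a 4x4 camera transform from the attitude's CMRotationMatrix and
// post-multiply it by a rotation about the Z (screen) axis, e.g. ±.pi/2
// for landscape, as described in the question.
func cameraTransform(from r: CMRotationMatrix, interfaceRotation angle: Float) -> simd_float4x4 {
    let attitude = simd_float4x4(rows: [
        SIMD4<Float>(Float(r.m11), Float(r.m12), Float(r.m13), 0),
        SIMD4<Float>(Float(r.m21), Float(r.m22), Float(r.m23), 0),
        SIMD4<Float>(Float(r.m31), Float(r.m32), Float(r.m33), 0),
        SIMD4<Float>(0, 0, 0, 1)
    ])
    let c = cos(angle), s = sin(angle)
    let zRotation = simd_float4x4(rows: [
        SIMD4<Float>(c, -s, 0, 0),
        SIMD4<Float>(s,  c, 0, 0),
        SIMD4<Float>(0,  0, 1, 0),
        SIMD4<Float>(0,  0, 0, 1)
    ])
    return attitude * zRotation
}
Whether the extra rotation should be pre- or post-multiplied depends on the row/column convention used by the rest of the pipeline, which is exactly the kind of detail the accepted adaptation above sidesteps by swapping x and y instead.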
