Why is AVCaptureDevice.maxAvailableVideoZoomFactor so large?

On my iPad with a digital zoom of 5x, the reported captureDevice.maxAvailableVideoZoomFactor for the camera device is 153.
Shouldn't it be 5? Am I using the right property to get the max zoom factor?

I found this in the documentation for maxAvailableVideoZoomFactor:
On single-camera devices, this value is always equal to the device format’s videoMaxZoomFactor value. On a dual-camera device, the allowed range of video zoom factors can change if the device is delivering depth data to one or more capture outputs.
So if your iPad has a dual or triple camera and you are using builtInDualCamera (or a similar virtual device), the reported maximum zoom factor can be much larger than the optical zoom range, because it includes the full digital zoom headroom.

Related

Making a ruler functionality within the app, but different screen sizes?

I'm working on a project where, in this particular ViewController, I have set up a scale as a UIImageView. It needs to react to touch events on that scale and produce some output. My question is: how do I implement accuracy across all the different devices? I was thinking of using if-else statements for every single device (iPhone 4, 5, 6) and assigning the properties based on those conditions, but that seems like dirty coding. Is there another method for this type of functionality? I'd appreciate any opinions or tips to put me on the right track. Thanks
You can use UIScreen's scale property to determine if the device has a retina screen (@2x or @3x), which will help to some extent. At present, every iPhone has the same number of points per inch (163), with differing numbers of pixels per inch depending on the device. The iPads are a different matter, though, as the iPad Mini has the same point density as an iPhone, but the 9.7" iPad and the iPad Pro have a lower density. I think for the iPads you'll need to detect the specific device to be able to figure out the physical screen size.

Is this possible to draw a physical size 5 cm * 5 cm square on the iOS device?

I would like to draw a square that is physically 5 cm × 5 cm, instead of specifying it in pixels. Is there any generic way to do it, without collecting screen information for every iOS device? Thanks.
There is no generic way that is 100% accurate. You would need to compile a database of all the different devices, with an appropriate pixel density figured out ahead of time for each.

Cocos2d and body.m_radius depends on device type

I'm a little bit confused by points in cocos2d.
I have a universal game, and when I set the position of a body in points, it works well across all devices (iPhone, iPhone HD, iPad, iPad HD). I made textures in 4 sizes with the cocos2d suffixes and that works well too.
But I have a body (b2CircleShape) and I need to set the m_radius of this circle.
I have these lines:
b2CircleShape myDynamicBody;
myDynamicBody.m_radius = 0.48f;
The value 0.48 is optimized for iPhone HD and works well on both iPhone variants (iPhone and iPhone HD), but on iPad and iPad HD devices the body is very small. What should I do? Check the device type and, when it's an iPad, multiply by 1.33? (When I tried multiplying by 1.33 it worked fine on both iPads.)
Or is there a better (or recommended) way to solve this problem?
Thanks for any advice
You should adjust your points-to-meters ratio (PTM_RATIO) on iPad.
The reason for 'points' is that Box2d is tuned to run physics simulations for bodies between 0.1 and 10 meters, with a typical body being about 1x1 meter. It will still work for bodies outside these sizes, but will be less reliable.
On the other hand, you've got a screen with either ~320x480 or 1024x768 points of resolution. Therefore we need a way to convert the sprite representation to a size that is suitable for Box2d. For this we use the points-to-meters ratio (PTM_RATIO).
If your typical sprite is 64x64, you should choose a PTM ratio of 64. This will make Box2d see your 64x64 sprite as 1x1, which is an ideal size to run simulations on.
Having explained that, the reason the PTM_RATIO varies between phone and tablet form factors is now obvious: it's because they have different resolutions. A sprite that is 64x64 on iPhone would be approximately 128x128 on a tablet.
So to get your PTM_RATIO on iPad, choose a sprite on iPhone and look at the corresponding size on iPad. Then multiply the iPhone PTM_RATIO by the ratio of the sizes between those two images.

cvCaptureFromCAM() / cvQueryFrame(): get native resolution of connected camera?

I'm using the two OpenCV functions mentioned above to retrieve frames from my webcam. No additional properties are set, just running with default parameters.
cvQueryFrame() always returns frames at 640x480, independent of the camera's native resolution. How can I get full-size frames when I don't know the camera's exact resolution and therefore can't set the width and height properties? Is there a way to reset these 640x480 settings, or a way to query the device for the maximum resolution it supports?
Thanks!

Does CGContextConvertRectToDeviceSpace work properly on retina devices?

I'm building an iOS-based core text application (iOS 5, iPad), and have a view within which I'm performing some typographic calculations (in drawRect).
I've noticed that when I use CGContextConvertRectToDeviceSpace, the result I get on a retina device is double that of a non-retina device. It's my understanding that within drawRect there is an implicit transform applied that should hide any retina/non-retina device differences, so I'm confused why I should see any difference at all.
More specifically, I'm trying to calculate the location in user coords of a given CTLine in user space coords. I first use CTLineGetTypographicBounds for my line, and use those values in CGContextConvertRectToDeviceSpace to get a rect... but the rect I get back has double the width and height on a retina device versus a non-retina one. Is there something I'm missing?
I've noticed that when I use CGContextConvertRectToDeviceSpace, the result I get on a retina device is double that of a non-retina device.
Sounds normal to me. "Device space" represents the actual pixels on the output device, which in this case is the retina display. It's normally scaled 2x larger than the CGContext's user space coordinate system.
It's rare to need to convert to or from device space; you would only do it if you absolutely needed to align your drawing to physical pixels.
What are you trying to accomplish by using CGContextConvertRectToDeviceSpace? There may be an alternative.
