I want to render virtual content over the camera image from the back-facing camera of the iPad 2. To achieve this, I use OpenGL ES to transform the content into the correct screen coordinates:
projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(FOV),
                                             16.0f / 9.0f, 0.05f, 5.0f);
The problem is the field of view parameter.
There are several posts about the iPhone and the iPad 1; however, I couldn't find one for the iPad 2 yet.
What is the field of view of the iPad 2 in landscape 16:9 HD mode?
This blog will probably help you:
http://hunter.pairsite.com/blogs/20110317/
In particular,
Using some basic trigonometry, this allowed me to determine that 4:3 stills taken with the iPad 2 back camera have an approximate 34.1 degree vertical field of view and an approximate 44.5 degree horizontal field of view. This equates roughly to a hypothetical 43mm focal length lens on a 35mm camera.
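If it helps, here is how those numbers would plug into the projection matrix from the question (a minimal sketch in Swift with GLKit; note the 34.1 degree figure was measured for 4:3 stills, so the 16:9 HD video mode's effective field of view may differ because of sensor cropping):

import GLKit

// Vertical FOV measured for 4:3 stills on the iPad 2 back camera (see the
// quote above). Treat this as a starting point for the 16:9 video mode.
let measuredVerticalFOV: Float = 34.1

let projectionMatrix = GLKMatrix4MakePerspective(
    GLKMathDegreesToRadians(measuredVerticalFOV), // fovy is the *vertical* FOV
    16.0 / 9.0,                                   // aspect ratio of the HD preview
    0.05,                                         // near plane
    5.0                                           // far plane
)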
I'm writing a Metal app for iPhone. I have tons of OpenGL experience, so it shouldn't be too hard, right?
WRONG.
I'm rendering a 2D scene of rectangles with no aspect ratio correction - the vertices are in [-1,1] x [-1,1] coordinates, so the scene should fill the entire screen, distorting to fit.
BTW, this is on a relatively new iPhone running iOS 12.1.2 (16C101).
In landscape mode (width > height), this is what I get (screencapped image): https://imgur.com/r7gJXct. Half of the screen is blank.
In portrait mode (height > width), I get exactly what I expected (distorted squares): https://imgur.com/ZoPoHhR.
I think what is happening is that Metal renders as if in portrait mode and clips whatever goes off the screen, instead of "squishing" the output to fit the screen viewport.
The code is just basic metal code, no configuration. I took the default Metal iOS project, removed the code in Renderer, and followed the Hello Triangle tutorial with uniforms sent to the shader and different vertex data.
How should I go about fixing this "bug"?
Is it even a bug?
Figured it out!
It turns out that in func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) I have to add the line metalLayer.frame = view.layer.frame, and it works.
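In case it's useful to anyone else, here is the fix in context (a minimal sketch; metalLayer is assumed to be the CAMetalLayer my renderer draws into, separate from the MTKView's own layer):

import MetalKit
import QuartzCore

final class Renderer: NSObject, MTKViewDelegate {
    var metalLayer: CAMetalLayer!  // the layer the renderer draws into

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        // Keep the layer's frame in sync with the view's, so after a rotation
        // the drawable is stretched to the new bounds instead of keeping the
        // portrait-sized frame and clipping.
        metalLayer.frame = view.layer.frame
    }

    func draw(in view: MTKView) {
        // rendering elided
    }
}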
I'm writing a project in Xcode 7 / Swift 2 that is optimized for iPhone 6/6s (i.e. the project has a launch screen file and launch screen images for iPhone 6/6s).
Fortunately or unfortunately, iPhone 6 users have the ability to turn on the 'Display Zoom' setting on the device, which enlarges elements of the interface. When turned on, this setting effectively enlarges a standard iPhone 5 screen size to fit the iPhone 6 screen space, upsampling by x1.171875. This upsampling causes raster-based elements, such as images, icons, or views that contain UIBezierPath() drawings, to display blurred (mildly, but noticeably).
A few questions:
1 - How can I instruct elements (e.g. a UIView) on the Storyboard in code to disregard the Display Zoom setting when the user has turned it on?
2 - What techniques are there to ensure pixel-perfect accuracy remains when Display Zoom is on? (e.g. Is it possible to render graphics using OpenGL rendering, and if so, how?)
3 - Is it possible to replace a x2 image with a x4 image to reduce any blurring when Display Zoom is on? (i.e. will iOS downsample a x4 image to a x2 image on the iPhone 6?)
4 - How can UIBezierPath() drawings maintain pixel-perfect accuracy when Display Zoom is on?
Appreciate any experienced responses on this conundrum. Thanks.
There's nothing you can do about this. A user who chooses zoomed mode is deliberately throwing away pixel accuracy. The points in the drawing no longer match the pixels on the screen one-to-one (or one-to-two or one-to-three or any integral ratio). This choice therefore blurs the screen for everything the user does, not just your app.
Nor can you detect what is happening, because in effect zoomed iPhone 6 is presented to your app as an iPhone 5 (and a zoomed 6 Plus is presented to your app as a 6).
As @matt says, there's nothing you can do about this for normal UIKit content.
However, for OpenGL ES or Metal content, you can opt out of the sampling the device does and render straight into the device's physical coordinates, allowing for pixel-perfect drawing.
In a graphics app that uses Metal or OpenGL ES, content can be easily rendered at the precise dimensions of the display without requiring an additional sampling stage. This is critical in high-performance 3D apps that perform many calculations for each rendered pixel. Instead, create buffers to render into that are the exact resolution of the display.
OpenGL ES
Set the contentsScale of the CAEAGLLayer to [UIScreen mainScreen].nativeScale, or use a GLKView, which does this automatically.
You will then want to create your framebuffer at the size of the device's physical coordinates.
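A minimal sketch of that in Swift, assuming a color renderbuffer is already bound and hypothetical context / eaglLayer objects:

import OpenGLES
import UIKit

// Render at the display's physical resolution instead of the zoomed
// logical size. Assumes a color renderbuffer is bound to GL_RENDERBUFFER.
func configureForNativeScale(context: EAGLContext, eaglLayer: CAEAGLLayer) {
    eaglLayer.contentsScale = UIScreen.main.nativeScale

    // Renderbuffer storage allocated from the layer is now at the native
    // pixel size; query it back to size the viewport.
    context.renderbufferStorage(Int(GL_RENDERBUFFER), from: eaglLayer)
    var width: GLint = 0, height: GLint = 0
    glGetRenderbufferParameteriv(GLenum(GL_RENDERBUFFER), GLenum(GL_RENDERBUFFER_WIDTH), &width)
    glGetRenderbufferParameteriv(GLenum(GL_RENDERBUFFER), GLenum(GL_RENDERBUFFER_HEIGHT), &height)
    glViewport(0, 0, width, height)
}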
Metal
Set the contentsScale of your CAMetalLayer to [UIScreen mainScreen].nativeScale, or use an MTKView, which does this automatically.
You will also want to adjust the drawable size to account for the scale (lifted from the docs):
CGSize drawableSize = self.bounds.size;
drawableSize.width *= self.contentScaleFactor;
drawableSize.height *= self.contentScaleFactor;
metalLayer.drawableSize = drawableSize;
See also this interesting blog post on how the iPhone 6 Plus renders content, plus the follow-up post specifically about Display Zoom.
I am using the AVFoundation framework to have a camera view on my iPhone app.
On the iPhone 5 and later, the camera preview fills the entire view it's in. On the iPhone 4/4S, the view isn't filled entirely and I get blank spaces on both sides. I believe it's not because of the constraints, but because of the proportions of the iPhone's screen:
With the preset AVCaptureSessionPresetPhoto on iPhone 5/6, the ratio of the image is 3/4. On iPhone 4/4S with the same preset, the ratio of the image is 9/16. It looks like the shape of my camera's view can be filled by a 3/4 image, but not by a 9/16 image.
So I looked through all the available presets and tried AVCaptureSessionPreset640x480, thinking the problem would be solved since 640x480 gives a 3/4 image... But it didn't fix anything; I still have blank spaces on both sides of the view.
Is there a way to adapt the resolution with the AVFoundation framework?
(I think my problem would be easier to understand with images, but I'm not authorized to post some yet)
I didn't find a way to use the camera at the exact ratio I wanted, so I just created a new ViewController that I only use on the iPhone 4/4S, with a different interface for that device.
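For reference, the check-then-set preset pattern the question describes looks like this (a sketch using the modern Swift names; a Swift 2 project would assign the AVCaptureSessionPreset640x480 string constant instead):

import AVFoundation

let session = AVCaptureSession()

// Ask whether the 4:3 preset is supported before applying it,
// and fall back to the photo preset otherwise.
if session.canSetSessionPreset(.vga640x480) {
    session.sessionPreset = .vga640x480   // 640x480, a 4:3 frame
} else {
    session.sessionPreset = .photo
}

Note that this only changes the captured resolution; how the preview layer fits the resulting frames into its bounds is controlled separately, by the layer's videoGravity.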
Can someone give me a minimal example, for iOS 5 or higher, of an app that captures video from the back-facing camera with the following requirements:
1. The resolution of the image captured by the camera is 1280x720.
2. The preview layer is an AVCaptureVideoPreviewLayer and fills the full screen on both iPhone 5 and iPhone 4.
3. The orientation of both video and preview is LandscapeRight.
4. If I zoom the preview, it correctly zooms into the center of my landscape image.
I got everything working except requirement 4. But every time I try to apply the zoom with
viewLayer.affineTransform = CGAffineTransformMakeScale(2, 2);
I observe that in landscape orientation the center of the zoomed image lies below and to the left of the center of the unzoomed image. That is the problem I am trying to solve: whether I set the scale to 1, 1.3, 1.6, or 2.0, I always want to see the center of my unzoomed image.
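For what it's worth, a plain scale transform is applied about the layer's anchorPoint, which is the center by default, so if the zoom drifts, the first thing to check is that the layer's bounds actually match what is on screen in the current orientation. A general sketch for zooming about an arbitrary point, in case it helps:

import UIKit

// Zoom a layer about an arbitrary point in its own coordinate space.
// With the default anchorPoint of (0.5, 0.5), a bare scale zooms about
// the center of layer.bounds; this translate-scale-translate keeps
// `point` fixed instead.
func zoom(_ layer: CALayer, about point: CGPoint, scale: CGFloat) {
    let center = CGPoint(x: layer.bounds.midX, y: layer.bounds.midY)
    var t = CGAffineTransform(translationX: point.x - center.x,
                              y: point.y - center.y)
    t = t.scaledBy(x: scale, y: scale)
    t = t.translatedBy(x: center.x - point.x, y: center.y - point.y)
    layer.setAffineTransform(t)
}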
Kind of a fun question. I am hoping it generates a lot of good thinking.
I am in the design, early alpha stage of my current orthogonal game project. I am experimenting with different tile sizes. A few questions as I would like to step off on the right foot.
Should we differentiate tile size (80x80, 32x32, etc.) between Retina and non-Retina displays, or between iPhone and iPad displays?
What is a good recommended tile size that accommodates both the designer and the artist... and why?
Goals:
I would like clean, crisp visuals no matter the display format: cartoony, colorful 16-bit to 32-bit images on any display.
I would like to keep to a 1024x1024 texture size for my atlas. I am hoping this will give me enough tiles to make the world look good and not crush my tile map system.
My current map size is 200 tiles wide x 120 tiles high. The map will be a low-detail (nautically focused) Mercator projection of Earth.
Thanks in advance for all the good advice.
E
I usually make my games for the iPad screen aspect, making sure the important elements are within a smaller safe zone and the UI can be anchored at a specified distance from the edges. Then for the iPhone screen aspect I crop a small portion of the screen and lay out the UI accordingly.
So if you are working in landscape here are the sizes you need to support:
480x320 - iPhone (0.5)
960x640 - iPhone Retina (1)
1024x768 - iPad (1)
2048x1536 - iPad Retina (2)
The numbers in brackets indicate the scale. I just like picking the iPad (1024x768) for my in-game units. At this point I have all textures in 3 sizes, and since I'm using OpenGL I use different mipmap levels for each resolution I need. My texture loading function can skip mipmap levels, so that on devices where I don't need high-res textures I save memory and loading time, as in the sketch below.
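The level-skipping trick is roughly this (a sketch assuming the atlas ships with precomputed mip levels, largest first, and a texture already bound to GL_TEXTURE_2D):

import OpenGLES
import Foundation

// Upload a mip chain, optionally skipping the largest levels on low-res
// devices: the first level kept becomes GL level 0, so the GPU never
// stores the full-size image.
func uploadTexture(levels: [(width: GLsizei, height: GLsizei, pixels: Data)], skip: Int) {
    for (i, level) in levels.dropFirst(skip).enumerated() {
        level.pixels.withUnsafeBytes { (buf: UnsafeRawBufferPointer) in
            glTexImage2D(GLenum(GL_TEXTURE_2D), GLint(i), GL_RGBA,
                         level.width, level.height, 0,
                         GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE),
                         buf.baseAddress)
        }
    }
}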
It depends on whether you need to tap individual tiles. If you do, I'd suggest 64x64 on iPhone (480x320) and 256x256 on iPad Retina (2048x1536). Having all your art in powers of 2 is always good. If that size is too large, consider 48x48 for iPhone and 192x192 for iPad Retina. If your game requires it, you can have smaller tiles, but consider having a larger active zone around the entities that have to be tapped (hopefully not every tile will be clickable).
I faced a similar issue a while ago and realized I was tackling the problem from the wrong angle.
You first need to consider the average finger/thumb size of the user and determine how many points that contact area covers; Apple's guidelines suggest a minimum touch target of about 44 points.
From there you can derive the pixel sizes to use on non-Retina and Retina displays (44 and 88 pixels respectively).
N.B. a game that plays well on the iPad might not work on the iPhone if the user's fingers obscure the view.