iOS GLKit texture blurry on retina display

I am working with OpenGL ES 2.0 and GLKit for iOS.
My app only needs to run at a resolution of 480x320, like the pre-iPhone 4 displays, because it uses retro-style graphics.
The texture graphics are authored for this resolution and a GLKit projection matrix of (0, 480, 0, 320).
This all looks fine on the 3GS, but on later models iOS (understandably) does some sort of resizing to stretch the scene to the larger screen. This resizing results in an undesirable blurring/smoothing of the graphics, probably because it uses a default interpolation scheme.
Is it possible to control how this resizing is done? Preferably by disabling interpolation entirely, so the pixels are just directly enlarged.

You need to set the scaling filters on the view's layer, so that Core Animation uses nearest-neighbour sampling when it scales the layer to the screen:
// In your view (e.g. a GLKView subclass):
self.layer.magnificationFilter = kCAFilterNearest; // no smoothing when scaled up
self.layer.minificationFilter = kCAFilterNearest;  // no smoothing when scaled down
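The same idea as a minimal Swift sketch, assuming you also pin the backing framebuffer at 1x so the scene stays low-res (the class name is illustrative):

import GLKit

final class RetroGLKView: GLKView {
    override func didMoveToWindow() {
        super.didMoveToWindow()
        // One framebuffer pixel per point keeps the retro scene low-res;
        // Core Animation then enlarges it without smoothing.
        contentScaleFactor = 1.0
        layer.magnificationFilter = .nearest
        layer.minificationFilter = .nearest
    }
}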

Related

Odd difference between portrait and landscape mode with iPhone metal

I'm writing a Metal app for iPhone. I have tons of OpenGL experience, so it shouldn't be too hard, right?
WRONG.
I'm rendering this 2d scene of rectangles with no aspect ratio correction - the vertices are in [-1,1] x [-1,1] coordinates, so this should fill the entire screen and distort the scene to fit the screen.
BTW, this is running on a relatively new iPhone, on iOS 12.1.2 (16C101).
In landscape mode (width > height), this is what I get (screencapped image): https://imgur.com/r7gJXct. Half of the screen is blank.
In portrait mode (height > width), I get exactly what I expected (distorted squares): https://imgur.com/ZoPoHhR.
I think what is happening is that Metal just renders the portrait mode, and clips whatever goes off the screen, without "squishing" it to the screen viewport.
The code is just basic Metal code, no configuration. I took the default Metal iOS project, removed the code in Renderer, and followed the Hello Triangle tutorial, with uniforms sent to the shader and different vertex data.
How should I go about fixing this "bug"?
Is it even a bug?
Figured it out!
In func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize), I have to add the line metalLayer.frame = view.layer.frame, and it works.
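In context, the fix looks like this (a sketch of the MTKViewDelegate callback; metalLayer is assumed to be the CAMetalLayer being rendered into, as in the question):

import MetalKit

func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
    // After a rotation, re-sync the Metal layer's frame with the view so the
    // drawable covers the new orientation instead of clipping to the old one.
    metalLayer.frame = view.layer.frame
}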

'Display Zoom' iPhone 6/6s setting blurs graphics

I'm writing a project in Xcode 7 / Swift 2 that is optimized for iPhone 6/6s (i.e. the project has a launch screen file and launch screen images for iPhone 6/6s).
Fortunately or unfortunately, iPhone 6 users can turn on the 'Display Zoom' setting, which enlarges elements of the interface. When turned on, this setting effectively enlarges a standard iPhone 5 screen to fit the iPhone 6 screen space, upsampling by 1.171875x. This upsampling causes raster-based elements such as images, icons, or views containing UIBezierPath() drawings to display blurred (mildly, but noticeably).
A few questions:
1 - How can I instruct elements (e.g. a UIView) on the storyboard, in code, to disregard the Display Zoom setting when the user has turned it on?
2 - What techniques are there to ensure pixel-perfect accuracy when Display Zoom is on? (e.g. is it possible to render graphics using OpenGL, and if so, how?)
3 - Is it possible to replace a @2x image with a @4x image to reduce blurring when Display Zoom is on? (i.e. will iOS downsample a @4x image to @2x on an iPhone 6?)
4 - How can UIBezierPath() drawings maintain pixel-perfect accuracy when Display Zoom is on?
Appreciate any experienced responses on this conundrum. Thanks.
There's nothing you can do about this. A user who chooses zoomed mode is deliberately throwing away pixel accuracy. The points in the drawing no longer match the pixels on the screen one-to-one (or one-to-two or one-to-three or any integral ratio). This choice therefore blurs the screen for everything the user does, not just your app.
Nor can you detect what is happening, because in effect zoomed iPhone 6 is presented to your app as an iPhone 5 (and a zoomed 6 Plus is presented to your app as a 6).
As @matt says, there's nothing you can do about this for normal UIKit content.
However, for OpenGL ES or Metal content, you can opt out of the scaling the device performs and render straight into the device's physical coordinates, allowing pixel-perfect drawing.
In a graphics app that uses Metal or OpenGL ES, content can be easily rendered at the precise dimensions of the display without requiring an additional sampling stage. This is critical in high-performance 3D apps that perform many calculations for each rendered pixel. Instead, create buffers to render into that are the exact resolution of the display.
OpenGL ES
Set the contentsScale of the CAEAGLLayer to [UIScreen mainScreen].nativeScale, or use a GLKView, which does this automatically.
You will then want to create your framebuffer at the size of the device's physical coordinates, as sketched below.
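In Swift, that setup might look like this (a sketch, assuming a CAEAGLLayer-backed view, an existing EAGLContext named context, and a bound color renderbuffer):

import UIKit
import OpenGLES

let eaglLayer = view.layer as! CAEAGLLayer
eaglLayer.contentsScale = UIScreen.main.nativeScale  // physical pixels, even under Display Zoom

// Allocate renderbuffer storage from the layer, then read back the resulting
// pixel size so the framebuffer matches the display's physical resolution.
context.renderbufferStorage(Int(GL_RENDERBUFFER), from: eaglLayer)
var width: GLint = 0, height: GLint = 0
glGetRenderbufferParameteriv(GLenum(GL_RENDERBUFFER), GLenum(GL_RENDERBUFFER_WIDTH), &width)
glGetRenderbufferParameteriv(GLenum(GL_RENDERBUFFER), GLenum(GL_RENDERBUFFER_HEIGHT), &height)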
Metal
Set the contentsScale of your CAMetalLayer to [UIScreen mainScreen].nativeScale, or use an MTKView, which does this automatically.
You will also want to adjust the drawable size to account for the scale (lifted from the docs):
CGSize drawableSize = self.bounds.size;
drawableSize.width *= self.contentScaleFactor;
drawableSize.height *= self.contentScaleFactor;
metalLayer.drawableSize = drawableSize;
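The same adjustment in Swift (a sketch; assumes the view's backing layer is a CAMetalLayer):

import UIKit
import QuartzCore

let metalLayer = view.layer as! CAMetalLayer
metalLayer.contentsScale = UIScreen.main.nativeScale
// Match the drawable to the layer's physical pixel size.
metalLayer.drawableSize = CGSize(width: view.bounds.width * metalLayer.contentsScale,
                                 height: view.bounds.height * metalLayer.contentsScale)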
See also this interesting blog post on how the iPhone 6 Plus renders content, plus the follow-up post specifically about Display Zoom.

Core Video texture cache and minification issue

I use the Core Video texture cache for my OpenGL textures, and I have an issue when rendering such textures with minification. The GL_TEXTURE_MIN_FILTER parameter has no effect: minification always uses the same interpolation as GL_TEXTURE_MAG_FILTER. Interestingly, everything works fine when I create the pixel buffer with the CVPixelBufferCreateWithBytes function; the problem appears when I use CVPixelBufferCreate.
Environment:
iOS 7
OpenGL ES 2.0
iPad mini, iPad 3, iPad 4.
I've developed a simple application which demonstrates this issue: https://github.com/Gubarev/iOS-CVTextureCache. The demo application renders a checkerboard texture (cell size 1x1) in three modes:
Regular OpenGL texture (ok).
Core Video texture, pixel buffer created with CVPixelBufferCreate (problem).
Core Video texture, pixel buffer created with CVPixelBufferCreateWithBytes (ok).
The texture is rendered twice with slight minification (achieved by using an OpenGL viewport smaller than the texture):
Left image rendered with minification filter GL_NEAREST, magnification filter GL_NEAREST.
Right image rendered with minification filter GL_LINEAR, magnification filter GL_NEAREST.
The image below demonstrates proper minification for a regular OpenGL texture: it's clearly visible that the minification filter setting takes effect. The same results are obtained with the CVPixelBufferCreateWithBytes approach. The problem appears with the CVPixelBufferCreate approach: both images are minified using the magnification filter setting (GL_NEAREST in particular).
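For reference, the problematic path looks roughly like this (a sketch with error checking omitted; textureCache is assumed to have been created earlier with CVOpenGLESTextureCacheCreate):

import CoreVideo
import OpenGLES

var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary] as CFDictionary
CVPixelBufferCreate(kCFAllocatorDefault, 512, 512,
                    kCVPixelFormatType_32BGRA, attrs, &pixelBuffer)

var cvTexture: CVOpenGLESTexture?
CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault, textureCache, pixelBuffer!, nil,
    GLenum(GL_TEXTURE_2D), GL_RGBA, 512, 512,
    GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE), 0, &cvTexture)

glBindTexture(CVOpenGLESTextureGetTarget(cvTexture!), CVOpenGLESTextureGetName(cvTexture!))
// Per the report above, this minification setting is ignored for
// CVPixelBufferCreate-backed textures; the magnification filter wins for both.
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_NEAREST)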

Tile size, retina vs non-retina, iPad vs iPhone

Kind of a fun question. I am hoping it generates a lot of good thinking.
I am in the design, early alpha stage of my current orthogonal game project. I am experimenting with different tile sizes. A few questions as I would like to step off on the right foot.
Should we differentiate tile size (80x80, 32x32, etc.) between retina and non-retina displays, or between iPhone and iPad displays?
What is a good recommended tile size that accommodates both the designer and the artist... and why?
Goals:
I would like clean, crisp visuals no matter the display format: cartoony, colorful 16-bit to 32-bit style images on every display.
I would like to keep to a 1024x1024 texture size for my atlas. I am hoping this will give me enough tiles to make the world look good and not crush my tile map system.
My current map size is 200 tiles wide x 120 tiles high. The map will be a low-detail (nautically focused) Mercator projection of Earth.
Thanks in advance for all the good advice.
E
I usually design my games for the iPad screen aspect, making sure the important elements sit inside a smaller safe zone and that the UI can be anchored at a specified distance from the edges. Then, for the iPhone screen aspect, I crop a small portion of the screen and lay out the UI accordingly.
So if you are working in landscape here are the sizes you need to support:
480x320 - iPhone (0.5)
960x640 - iPhone Retina (1)
1024x768 - iPad (1)
2048x1536 - iPad Retina (2)
The numbers in brackets indicate the scale. I just like picking iPad (1024x768) for my in-game units. At this point I have all textures in 3 sizes; since I'm using OpenGL, I use different mipmap levels for each resolution I need. My texture loading function can skip mipmap levels, so that on devices that don't need high-res art I save memory and loading time (see the sketch below).
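A sketch of that kind of loader (illustrative names; assumes each mip level is pre-generated square RGBA data, largest first):

import Foundation
import OpenGLES

func loadTexture(mipLevels: [Data], fullSize: Int, skippingLevels skip: Int) -> GLuint {
    var tex: GLuint = 0
    glGenTextures(1, &tex)
    glBindTexture(GLenum(GL_TEXTURE_2D), tex)
    // Skip the largest levels on devices that don't need them: less memory,
    // faster loading, no visible loss at the lower target resolution.
    var size = fullSize >> skip
    for (level, data) in mipLevels.dropFirst(skip).enumerated() {
        data.withUnsafeBytes { buffer in
            glTexImage2D(GLenum(GL_TEXTURE_2D), GLint(level), GL_RGBA,
                         GLsizei(size), GLsizei(size), 0,
                         GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), buffer.baseAddress)
        }
        size = max(1, size / 2)
    }
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR_MIPMAP_LINEAR)
    return tex
}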
It depends on whether you need to tap individual tiles. If you do, I'd suggest 64x64 on iPhone (480x320) and 256x256 on iPad Retina (2048x1536). Having all your art in power-of-two sizes is always good. If that size is too large, consider 48x48 for iPhone and 192x192 for iPad Retina. If your game requires smaller tiles you can use them, but consider having a larger active zone around the entities that must be tapped (hopefully not every tile will be tappable).
I faced a similar issue a while ago and realized I was tackling the problem from the wrong angle.
You first need to consider the average finger/thumb size of the user and determine how many pixels/points consume that space.
From there you can derive the non-Retina Display pixel units and Retina Display point units to use.
N.B. that a game that might play well on the iPad might not work on the iPhone if the user's fingers obscure the view.
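As a concrete starting point (assuming Apple's recommended 44-point minimum touch target from the Human Interface Guidelines):

import UIKit

let minTouchPoints: CGFloat = 44            // Apple HIG minimum tap target
let scale = UIScreen.main.scale             // 1.0 non-retina, 2.0 retina
let minTilePixels = minTouchPoints * scale  // 44 px at 1x, 88 px at 2x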

Rendering large textures on iOS OpenGL

I'm developing an iPad 2 app which will overlay panoramic views on top of physical space using Cinder.
The panorama images are about 12900x4000 pixels; they are being loaded from the web.
Right now the line to load the image is:
mGhostTexture = gl::Texture( loadImage( loadUrl( "XXX.jpg" ) ) );
This works fine for small images (e.g. 500x500), but not for the full-size images: the rendered texture becomes a large white box.
I assume I'm hitting a size limit. Does anyone know a way to render or split up large images in OpenGL and/or Cinder?
For OpenGL ES 2.0: "The maximum 2D or cube map texture size is 2048 x 2048. This is also the maximum renderbuffer size and viewport size."
Also, a possible solution is described here:
Using libpng to "split" an image into segments
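On the iOS side, the split can also be done with CoreGraphics before upload (a sketch; each resulting tile would then be wrapped in its own gl::Texture and drawn at the matching offset):

import UIKit

// Split an oversized image into tiles that fit under the ES 2.0 2048x2048 limit.
func tiles(from image: CGImage, maxSide: Int = 2048) -> [CGImage] {
    var result: [CGImage] = []
    for y in stride(from: 0, to: image.height, by: maxSide) {
        for x in stride(from: 0, to: image.width, by: maxSide) {
            let rect = CGRect(x: x, y: y,
                              width: min(maxSide, image.width - x),
                              height: min(maxSide, image.height - y))
            if let tile = image.cropping(to: rect) {
                result.append(tile)
            }
        }
    }
    return result
}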
