Possible Retina issue with OpenGL ES on iPhone? - ios

This is probably linked to another unsolved mystery of mine.
I'm drawing orthographic 2D on iPhone, using both a real device and the simulator. I'm trying to color my pixels a given color depending on how far they are from an arbitrary point 'A' in pixel space, which I pass in (hard-coded). I'm doing everything at the Retina 960x640 resolution. I calculate the distance from A to gl_FragCoord, and I color by lerping between two colors with the 'max' being a 300px distance.
On the simulator (with Retina display) I need to give a center point of 460 pixels for the screen midpoint X; for Y I give 160px, and I use a distance of 300px. To get the same effect on the device I need a center of 960 for X and a distance of 150 to get the same results (interestingly, 80px doesn't give the results I want, but 160 could be an overshoot of the original...).
Obviously a retina issue is at play. But where, and how, and how do I find and fix it?
I'm using:
glViewport(0, 0, 960.0f, 640.0f);
and:
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
And:
[self setView:[[EAGLView alloc] initWithFrame:[UIScreen mainScreen].bounds]];
[(EAGLView *)[self view] setContentScaleFactor:2.0f];

You shouldn't hardcode "glViewport(0, 0, 960.0f, 640.0f);". Set up the viewport this way instead:
glViewport(0, 0, framebufferWidth, framebufferHeight);
Also, don't hardcode the content scale; you can find out the content scale with:
float contentScale = 1.0f;
if ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)]) {
    contentScale = [[UIScreen mainScreen] scale];
}
About the pixel distance: since you want the distance to double on a Retina display, you can add a uniform to your shader with the content scale, as in the sketch below.
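For illustration, a minimal sketch of feeding the content scale to the fragment shader as a uniform; the program handle (program) and the uniform name (u_contentScale) are assumptions, not names from the original post:
// Objective-C side: upload the content scale once after linking the program.
// 'program' is an assumed GLuint handle for the linked shader program.
GLint scaleLocation = glGetUniformLocation(program, "u_contentScale");
glUseProgram(program);
glUniform1f(scaleLocation, contentScale);
// Fragment shader side (GLSL ES), scaling the point-space values into pixels:
//   uniform lowp float u_contentScale;
//   uniform mediump vec2 u_center;   // point A, given in points
//   ...
//   mediump float dist = distance(gl_FragCoord.xy, u_center * u_contentScale);
//   mediump float t = clamp(dist / (300.0 * u_contentScale), 0.0, 1.0);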

iOS devices can now have multiple different screens, and a UIWindow can be placed on a non-main screen. In that case, you can use self.view.window.screen.scale to get the current screen's scale dynamically.
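A minimal sketch of that, falling back to the main screen when the view is not yet attached to a window (the properties are standard UIKit, but the fallback logic is just an assumption of this sketch):
CGFloat scale = self.view.window.screen.scale;
if (scale == 0.0) {
    // Messaging nil returns 0.0 here, i.e. the view isn't in a window yet.
    scale = [UIScreen mainScreen].scale;
}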

Related

Get screen dimension (per scanline) in pixels for iPhone X (10) ios 11

Considering that the iPhone X notch has smooth curves (not a rectangle around the notch), how can I find out whether a pixel (x, y) on a captured screenshot is part of the draw buffer covered by the notch (vs. the actual display buffer) on iPhone X?
I am trying to log the start X and end Y (pixel) position of the actual pixels drawn for each horizontal scan line when the iPhone X is in landscape mode.
Is there a way to use a built-in API to get this information? [[UIScreen mainScreen] nativeBounds] won't suffice in this case.
Hardcoding values or providing an image mask to detect it is possible. I am wondering if there is a more robust way to find the width in pixels of scan lines A/B/C in the attached pic.
// For a given scan line, for a given Y = any value between 0 & window.screen.height (landscape orientation)
+ (CGPoint)printScanlineForY:(NSUInteger)y {
    // what goes here?
    ...
    // Case 1: expected for a non-notch scan line, return CGPointMake(0, screen-width)
    // Case 2: expected for a notch-affected scan line, return (non-0, screen-width)
    //         non-0 as determined by the notch draw curve.
}
// Returning the nativeBounds CGRect would return the larger bounding box for the entire screen.
// Output on iPhone X simulator: Native Screen Bounds: {{0, 0}, {1125, 2436}}
+ (CGRect)printScreenBounds {
    CGRect bounds = [[UIScreen mainScreen] nativeBounds];
    NSLog(@"Native Screen Bounds: %@", NSStringFromCGRect(bounds));
    return bounds;
}
Related question with helpful details: Detect if the device is iPhone X
You can reference the documentation here, which has some nice images:
https://developer.apple.com/ios/human-interface-guidelines/overview/iphone-x/
If you align your views to the layout margins, the system should take care of putting your views in visible areas, as in the sketch below.
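For example, a minimal sketch (iOS 11+) that pins an existing contentView to the safe area layout guide; contentView is an assumed subview with translatesAutoresizingMaskIntoConstraints set to NO:
// Keep contentView out of the notch and home-indicator regions by pinning it
// to the safe area layout guide instead of the view's own edges.
UILayoutGuide *guide = self.view.safeAreaLayoutGuide;
[NSLayoutConstraint activateConstraints:@[
    [contentView.topAnchor constraintEqualToAnchor:guide.topAnchor],
    [contentView.leadingAnchor constraintEqualToAnchor:guide.leadingAnchor],
    [contentView.trailingAnchor constraintEqualToAnchor:guide.trailingAnchor],
    [contentView.bottomAnchor constraintEqualToAnchor:guide.bottomAnchor],
]];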
If you want to figure out exactly where the notch is, you can determine it empirically by creating a view that takes the full screen bounds and doing a hit test for each coordinate, checking whether it is "touchable". Collect the coordinates into some sort of array and then use them for future reference.
For instance, something like:
@implementation FindNotch
// Recursively calls -pointInside:withEvent:. point is in the receiver's coordinate system.
- (nullable UIView *)hitTest:(CGPoint)point withEvent:(nullable UIEvent *)event
{
    UIView *hitView = [super hitTest:point withEvent:event];
    if (hitView == nil) { ... } // the notch isn't "touchable"
    return hitView;
}
@end
FindNotch *notch = [[FindNotch alloc] initWithFrame:[UIScreen mainScreen].bounds];
static UIWindow *window;
window = [[UIWindow alloc] initWithFrame:UIScreen.mainScreen.bounds];
// Make sure the view is above anything else in the notch area. Might require some tweaking.
window.windowLevel = UIWindowLevelStatusBar + 1;
Note that this isn't tested, and you may need to raise the window level above the status bar, or else it may be "underneath" and not respond to touches.
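Building on that idea, here is a hedged and equally untested sketch of scanning one row of points for the first hittable x; the helper name and the overlay-window parameter are assumptions of this sketch:
// Scan a single row (y in points) from left to right and return the first x
// that hit-tests to a view, i.e. the first point on this scan line that the
// overlay window created above considers "touchable".
static CGFloat FirstVisibleXForY(CGFloat y, UIWindow *window) {
    CGFloat width = window.bounds.size.width;
    for (CGFloat x = 0; x < width; x += 1.0) {
        UIView *hit = [window hitTest:CGPointMake(x, y) withEvent:nil];
        if (hit != nil) {
            return x; // first visible point on this scan line
        }
    }
    return width; // nothing hittable on this row
}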

iOS OpenGL: Why has scaling my renderbuffer for retina shrunk my image, and how do I fix it?

I'm working on an augmented reality project using a Retina iPad, but the two layers - the camera feed and the OpenGL overlay - are not making use of the high-resolution screen. The camera feed is being drawn to a texture, which appears to be being scaled and sampled, whereas the overlay is using the blocky 4-pixel scale-up:
I have looked through a bunch of questions and added the following lines to my EAGLView class.
In initWithCoder, before calling setupFrameBuffer and setupRenderBuffer:
self.contentScaleFactor = [[UIScreen mainScreen] scale];
and in setupFrameBuffer
float screenScale = [[UIScreen mainScreen] scale];
float width = self.frame.size.width;
float height = self.frame.size.height;
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width * screenScale, height * screenScale);
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width*screenScale, height*screenScale, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
with the last two lines simply modified to include the scale factor.
Running this code gives me the following results:
As you can see, the image now only fills the lower-left quarter of the screen, but I can confirm the image is only scaled, not cropped. Can anyone help me work out why this is?
It's not actually being scaled; you are drawing the frames at a size you defined before allowing the render buffer to be 2x the size in both directions.
Most likely what is going on is that you defined your sizing in terms of pixels rather than the more general OpenGL coordinate space, which runs from -1 to 1 in both the x and y directions (at least when you are working in 2D, as you are).
Also, calling:
float width = self.frame.size.width;
float height = self.frame.size.height;
will return a size that is NOT the Retina size. If you NSLog those out, you will see that even on a Retina device they return values based on a non-Retina screen, or more generally in movement units (points), not pixels.
The way I have chosen to obtain the view's actual size in pixels is:
GLint myWidth = 0;
GLint myHeight = 0;
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &myWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &myHeight);
On iOS, I have been using the code below as my setup:
-(void)setupView:(GLView*)theView {
    const GLfloat zNear = 0.00, zFar = 1000.0, fieldOfView = 45.0;
    GLfloat size;
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
    //CGRect rect = theView.bounds;
    GLint width, height;
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &width);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &height);
    // NSLog(@"setupView rect width = %d, height = %d", width, height);
    glFrustumf(-size, size, -size / ((float)width / (float)height),
               size / ((float)width / (float)height), zNear, zFar);
    glViewport(0, 0, width, height);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
The above routine is used within code I am testing on both Retina and non-Retina setups and is working just fine. This setupView routine is an overridable method within a view controller.
Here's the solution I found.
I found out that in my draw function I was setting the glViewport sizes based on glview.bounds.size, which is in points rather than pixels. Switching that to be based on framebufferWidth and framebufferHeight solved the problem.
glViewport(0, 0, m_glview.framebufferWidth, m_glview.framebufferHeight);
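For context, a minimal sketch of the typical EAGLView color renderbuffer setup that produces those pixel dimensions; the context and colorRenderbuffer names are assumptions, while framebufferWidth/framebufferHeight are the variables already used above:
// Allocate storage for the color renderbuffer from the view's CAEAGLLayer, then
// read back its size in pixels; with contentScaleFactor set, these are Retina-sized.
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);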

Retina issue: how to make a UIView have a width of exactly 1 pixel instead of 2 pixels?

I know we are operating in points, not pixels, and in most cases that's convenient, but I need to make a UIView 1 pixel high instead of 2 pixels. So, if you drag and drop some UIView (a separator line) in Interface Builder and give it a height of 1 (point), it will still look like a 2-pixel line on a Retina screen (both on device and in the simulator).
I know there is the contentScaleFactor property on the view, which shows whether it is Retina (2.0f) or not (1.0f). It looks like views have the value 1.0f, so you need to retrieve the scale from the main screen:
[UIScreen mainScreen].scale;
This returns 2.0f for me. Now I've added a height constraint for this separator view and a method which checks isRetina and divides the height to make it exactly 1 pixel:
- (void)awakeFromNib {
    [super awakeFromNib];
    BOOL isRetina = ([UIScreen mainScreen].scale == 2.0f);
    if (isRetina) {
        self.separatorViewHeightConstraint.constant /= 2;
    }
}
This works; I'm just not sure whether it's a good idea to use a 0.5 value...
To support the newer 3x displays (e.g. iPhone 6 Plus), use this code:
UIScreen *mainScreen = [UIScreen mainScreen];
CGFloat onePixel = 1.0 / mainScreen.scale;
if ([mainScreen respondsToSelector:@selector(nativeScale)])
    onePixel = 1.0 / mainScreen.nativeScale;
Your code is valid. Using 0.5 to set the frame of a UIView will work as desired, since the frame's arguments are CGFloats. If you wish to use a CGFloat representing a single pixel in point units for something other than self.separatorViewHeightConstraint.constant, the code below will work.
CGFloat scaleOfMainScreen = [UIScreen mainScreen].scale;
CGFloat alwaysOnePixelInPointUnits = 1.0/scaleOfMainScreen;
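For illustration, a hedged usage example applying that value as a hairline separator height; the separatorView name is an assumption:
// Give an existing separator view a height of exactly one device pixel.
CGRect separatorFrame = separatorView.frame;
separatorFrame.size.height = alwaysOnePixelInPointUnits;
separatorView.frame = separatorFrame;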
You could just do
self.separatorViewHeightConstraint.constant = self.separatorViewHeightConstraint.constant / [UIScreen mainScreen].scale;
Yes, setting the value to 0.5 is the only way to get "real" 1px lines on Retina.
Sadly, none of the other answers apply to the iPhone 6 Plus.
1px lines are not possible on the iPhone 6 Plus, as the screen is rendered at 1242x2208 and then downsampled to 1080x1920. Sometimes you will get an almost perfect 1px line, and sometimes the line will disappear completely.
See http://www.paintcodeapp.com/news/iphone-6-screens-demystified for a proper explanation.

AVFoundation photo size and rotation

I'm having a nightmare of a time trying to crop and orient a photo taken with AVFoundation's captureStillImageAsynchronouslyFromConnection so that it matches exactly what is shown on the screen.
I show the AVCaptureVideoPreviewLayer with this code to make sure it displays the correct way up at all rotations:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
previewLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
if ([[previewLayer connection] isVideoOrientationSupported])
{
[[previewLayer connection] setVideoOrientation:(AVCaptureVideoOrientation)[UIApplication sharedApplication].statusBarOrientation];
}
[self.view.layer insertSublayer:previewLayer atIndex:0];
Now, when I have the returned image, it needs cropping, as it's much bigger than what was displayed.
I know there are loads of UIImage cropping examples, but the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
The preview is using AVLayerVideoGravityResizeAspectFill and I have my UIImageView also set to AspectFill
So how can I get the correct frame that AVFoundation is displaying on screen from the preview layer?
EDIT ----
Here's an example of the problem I'm facing. Using the front camera of an iPad Mini, the camera uses a resolution of 720x1280 but the display is 768x1024. The view displays this (see the dado rail at the top of the image):
Then when I take the image and display it, it looks like this:
Obviously the camera display was centered in the view, but the cropped image is taken from the top (unseen) section of the photo.
I'm working on a similar project right now and thought I might be able to help, if you haven't already figured this out.
the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
Let's say your image is 720x1280 and you want your image to be cropped to the rectangle of your display, which is a CGRect of size 768x1024. You can't just pass a rectangle of size 768x1024. First, your image isn't 768 pixels wide. Second, you need to specify the placement of that rectangle with respect to the image (i.e. by specifying the rectangle's origin point). In your example, self.view.frame is a CGRect that has an origin of (0, 0). That's why it's always cropping from the top of your image rather than from the center.
Calculating the cropping rectangle is a bit tricky because you have a few different coordinate systems.
You've got your view controller's view, which has...
...a video preview layer as a sublayer, which is displaying an aspect-filled image, but...
...the AVCaptureOutput returns a UIImage that not only has a different width/height than the video preview, but it also has a different aspect ratio.
So because your preview layer is displaying a centered and cropped preview image (i.e. aspect fill), what you basically want to find is the CGRect that:
Has the same display ratio as self.view.bounds
Has the same smaller dimension size as the smaller dimension of the UIImage (i.e. aspect fit)
Is centered in the UIImage
So something like this:
// Determine the width:height ratio of the crop rect, based on self.bounds
CGFloat widthToHeightRatio = self.bounds.size.width / self.bounds.size.height;
CGRect cropRect;
// Set the crop rect's smaller dimension to match the image's smaller dimension, and
// scale its other dimension according to the width:height ratio.
if (image.size.width < image.size.height) {
    cropRect.size.width = image.size.width;
    cropRect.size.height = cropRect.size.width / widthToHeightRatio;
} else {
    cropRect.size.width = image.size.height * widthToHeightRatio;
    cropRect.size.height = image.size.height;
}
// Center the rect in the longer dimension
if (cropRect.size.width < cropRect.size.height) {
    cropRect.origin.x = 0;
    cropRect.origin.y = (image.size.height - cropRect.size.height) / 2.0;
} else {
    cropRect.origin.x = (image.size.width - cropRect.size.width) / 2.0;
    cropRect.origin.y = 0;
}
So finally, to go back to your original example where the image is 720x1280 and you want it cropped to the rectangle of your display, which is 768x1024, you will end up with a CGRect of size 720x960, with an origin of x = 0, y = (1280 - 960) / 2 = 160.
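To actually apply the crop, a minimal sketch using Core Graphics, assuming the cropRect computed above and an input UIImage named image (rotation/orientation handling is deliberately ignored here):
// CGImageCreateWithImageInRect works in the pixel coordinates of the underlying
// CGImage and assumes an "up" orientation; rotated captures need the rect
// transformed before cropping.
CGImageRef croppedCGImage = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedCGImage
                                            scale:image.scale
                                      orientation:image.imageOrientation];
CGImageRelease(croppedCGImage);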

CGRect coordinates in non-retina form when using SetFrame

I have a view controller to which I applied the Retina 3.5" form factor in the storyboard. The iOS 6.1 iPhone simulator is also configured for Retina.
When I try to position a UIImageView using setFrame, its CGRect coordinates are in non-Retina form (i.e. when I position it at 320x480, it goes to the bottom right instead of the middle of the screen):
[myImageView setFrame:CGRectMake(320, 480, myImageView.frame.size.width, myImageView.frame.size.height)];
[self.view addSubview:myImageView];
How do I get Retina CGRect coordinates when using setFrame?
Thanks.
The reason for this is that iOS uses points instead of pixels. This way, the same code works on both Retina and non-Retina screens. Therefore, when you set the location to (320, 480) you are setting it to point (320, 480), not pixel (320, 480). On a non-Retina phone that point ends up being pixel (320, 480), and on Retina it ends up being pixel (640, 960).
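For illustration, a hedged snippet of that point-to-pixel relationship on the main screen (the variable names are just examples):
// Points are device-independent; multiply by the screen scale to get pixels.
CGFloat scale = [UIScreen mainScreen].scale;   // 1.0 on non-Retina, 2.0 on Retina
CGPoint inPoints = CGPointMake(320.0, 480.0);
CGPoint inPixels = CGPointMake(inPoints.x * scale, inPoints.y * scale); // (640, 960) on Retina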
So what it looks like you want is:
[myImageView setFrame:CGRectMake(160, 240, myImageView.frame.size.width, myImageView.frame.size.height)];
[self.view addSubview:myImageView];
which will place the imageView's top-left corner in the same location on both retina and normal display.
To center a view:
CGFloat w = self.view.frame.size.width;
CGFloat h = self.view.frame.size.height;
[myImageView setCenter:CGPointMake(w/2, h/2)];
...
[self.view addSubview:myImageView];
CGRectMake needs an x, y, width, and height; x,y are the top left of the view, so use 0,0,w,h for full-frame views.
