I'm building a Core Text application for iOS 5 on the iPad, and I have a view within which I'm performing some typographic calculations (in drawRect:).
I've noticed that when I use CGContextConvertRectToDeviceSpace, the result I get on a retina device is double that of a non-retina device. It's my understanding that within drawRect there is an implicit transform applied that should hide any retina/non-retina device differences, so I'm confused why I should see any difference at all.
More specifically, I'm trying to calculate the location of a given CTLine in user space coordinates. I first use CTLineGetTypographicBounds for my line, and use those values in CGContextConvertRectToDeviceSpace to get a rect... but the rect I get back has double the width and height on a retina device versus a non-retina one. Is there something I'm missing?
I've noticed that when I use CGContextConvertRectToDeviceSpace, the result I get on a retina device is double that of a non-retina device.
Sounds normal to me. "Device space" represents the actual pixels on the output device, which in this case is the retina display. It's normally scaled 2x larger than the CGContext's user space coordinate system.
It's rare to need to convert to or from device space -- you would only do it if you absolutely, positively needed to align your drawing to real pixels.
What are you trying to accomplish by using CGContextConvertRectToDeviceSpace? There may be an alternative.
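For instance, here is a minimal sketch of measuring a CTLine purely in user space (the attributed string and the baseline origin below are made-up values; your line would normally come from your own layout code). The typographic bounds are already in user-space points, so they come out the same on retina and non-retina devices; it's only the conversion to device space that multiplies them by the screen scale.

    #import <UIKit/UIKit.h>
    #import <CoreText/CoreText.h>

    // Sketch: measure a CTLine in user-space points inside drawRect:.
    // The typographic bounds are already in user space, so the values are
    // identical on retina and non-retina devices; only converting the rect
    // to device space would multiply it by the screen scale.
    - (void)drawRect:(CGRect)rect
    {
        NSAttributedString *text =
            [[NSAttributedString alloc] initWithString:@"Hello, Core Text"];
        CTLineRef line = CTLineCreateWithAttributedString(
            (__bridge CFAttributedStringRef)text);

        CGFloat ascent, descent, leading;
        double width = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);

        // A baseline origin in user-space points (made-up value).
        CGPoint origin = CGPointMake(20.0, 100.0);

        // The line's rect in user space -- the same numbers on 1x and 2x screens.
        CGRect lineRect = CGRectMake(origin.x, origin.y - descent,
                                     (CGFloat)width, ascent + descent);
        NSLog(@"user-space line rect: %@", NSStringFromCGRect(lineRect));

        CFRelease(line);
    }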
Related
I'm working on a project where, in this particular view controller, I have set up a scale as a UIImageView, and it needs to react to touch events based on that scale and produce some output. My question is: how do I make this accurate across all of the different devices? I was thinking of using if/else statements for every single device (iPhone 4, 5, 6) and assigning the properties based on those conditions, but that seems like dirty coding, no? Is there another method for this type of functionality? I'd appreciate some opinions or tips just to put me on the right track. Thanks
You can use UIScreen's scale property to determine whether the device has a retina screen (@2x or @3x), which will help to some extent. At present, every iPhone has the same number of points per inch (163), with differing numbers of pixels per inch depending on the device. The iPads are a different matter, though: the iPad mini has the same point density as an iPhone, but the 9.7" iPad and the iPad Pro have a lower density. I think for the iPads you'll need to detect the specific device to be able to figure out the physical screen size.
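As a rough sketch of that screen-driven approach (the function name and the half-inch example are made up; 163 is the iPhone points-per-inch figure mentioned above):

    #import <UIKit/UIKit.h>

    // Sketch: derive sizing from the screen instead of hard-coding branches
    // for each device model.
    static const CGFloat kiPhonePointsPerInch = 163.0; // iPhone figure from the answer

    void LogScreenInfo(void)
    {
        UIScreen *screen = [UIScreen mainScreen];

        // scale is 1.0, 2.0 (@2x) or 3.0 (@3x); bounds is measured in points.
        NSLog(@"scale: %.1f, bounds in points: %@",
              screen.scale, NSStringFromCGRect(screen.bounds));

        // e.g. a control that should be roughly half an inch wide on an iPhone
        CGFloat halfInchInPoints = 0.5 * kiPhonePointsPerInch;
        NSLog(@"half an inch is about %.0f points on an iPhone", halfInchInPoints);
    }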
I have an app targeting the iPad 2 (non-retina 1024x768 display). I don't explicitly enable retina mode and my scale factor is set to 1, but touch events are reporting coordinates as if in retina mode, i.e. the centre of the screen is (1024, 768) and the corner is (2047, 1535).
I thought the whole point was that iPad apps would automatically work in non-retina mode unless you explicitly enable it by changing the scale factor.
I'm using a library which does some of the UIView creation; how can I obtain the main UIView and query it to see what's happening?
For retina testing I am relying on the simulator only - I have 6.1. However, another developer confirmed it wasn't responding to touches on his iPad 3 device, so I'm sure it's not a simulator problem.
I had the opposite situation in my GLKView app: I used the screen's scale property to translate tap coordinates. So what you have, I suspect, is one of the following: (a) some property is set on your view in your storyboard (inspect them all), or (b) your device actually provides such a layout (which iPad is it, by the way?).
I'll post more details later.
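In the meantime, here is a sketch of the kind of translation I mean (the class name is illustrative): touches arrive in points, while a GLKView's drawable is measured in pixels, so the view's content scale factor bridges the two.

    #import <GLKit/GLKit.h>

    // Sketch: translating tap coordinates (points) into drawable coordinates
    // (pixels) for a GLKView, using the view's content scale factor.
    @interface TapView : GLKView
    @end

    @implementation TapView

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
    {
        UITouch *touch = [touches anyObject];
        CGPoint pointCoords = [touch locationInView:self];    // in points

        CGFloat scale = self.contentScaleFactor;               // 1.0 or 2.0
        CGPoint pixelCoords = CGPointMake(pointCoords.x * scale,
                                          pointCoords.y * scale);

        NSLog(@"tap at %@ points -> %@ pixels (drawable %ldx%ld)",
              NSStringFromCGPoint(pointCoords),
              NSStringFromCGPoint(pixelCoords),
              (long)self.drawableWidth, (long)self.drawableHeight);
    }

    @end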
There are a few technical mistakes here; perhaps you haven't read the Apple documentation carefully.
1) You cannot enable or disable retina: retina is in the hardware, and iOS already uses it correctly. You can only make use of it (and in some circumstances adapt your code to the device).
2) The scale factor should rarely be used (read the Apple docs), and only in specialized code related to custom drawing.
3) In Apple's view, you should generally NOT behave differently on retina and non-retina hardware.
4) Coordinates are LOGICAL coordinates, so the bottom-right of the screen is always at y = 1024, x = 768.
5) You cannot get a 2048-pixel resolution on an iPad 2.
I don't know which library you are using, but the standard behaviour is different from what you describe.
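To address the "how can I obtain the main UIView and query it" part of the question, one diagnostic sketch (the helper name is made up) is to walk the key window's view hierarchy and log what each view reports:

    #import <UIKit/UIKit.h>

    // Sketch: log the bounds and content scale factor of every view in the
    // hierarchy, to see whether any of them reports unexpected values.
    static void DumpViewInfo(UIView *view, NSUInteger depth)
    {
        NSString *indent = [@"" stringByPaddingToLength:depth * 2
                                             withString:@" "
                                        startingAtIndex:0];
        NSLog(@"%@%@ bounds=%@ contentScaleFactor=%.1f",
              indent, NSStringFromClass([view class]),
              NSStringFromCGRect(view.bounds), view.contentScaleFactor);
        for (UIView *subview in view.subviews) {
            DumpViewInfo(subview, depth + 1);
        }
    }

    // Call somewhere after the library has built its views, e.g.:
    //   NSLog(@"screen scale: %.1f", [UIScreen mainScreen].scale);
    //   DumpViewInfo([UIApplication sharedApplication].keyWindow, 0);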
I'm using a full-screen UIWebView to house/render an HTML5 application under iOS, very much like Cordova/PhoneGap, although I'm not using those. I try to make my web view's size match the underlying display. Unfortunately, on retina displays, the value returned by self.view.bounds is in points, scaled down by the UIScreen scale factor, so I get 1024x768 instead of 2048x1536. Not a huge problem, except that I then instantiate the UIWebView using the smaller bounds (after adjusting for the status bar) and some things get a bit jaggy. In particular, I use canvas at a couple of points, and rounded borders also appear thick. To be clear, this isn't a case of scaled raster resources.
I'm not getting the full resolution of the screen using that UIWebView. Ideally, I'd like to set the screen scale to 1.0, but that doesn't appear to be supported. Any suggestions on how best to take full advantage of the screen?
The problem is most noticeable on the iPhone 5 simulator; I don't have that hardware to test on. The iPad / new iPad has the problem too, I think, but the jaggies aren't as noticeable.
Update: The more I look at this, the more I think it may be restricted to canvas.
Solution: in case anyone else gets here - based on the answer below, I created all of my canvas elements with width and height multiplied by window.devicePixelRatio, and then set their style attribute to the original (device-independent) size.
See https://stackoverflow.com/a/7736803/341994. Basically you need to detect that you've got double resolution and effectively double the resolution of the canvas.
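One way to apply that fix from the native side is to inject a bit of JavaScript once the page has loaded; the hook below (webViewDidFinishLoad:) and the in-page logic are just a sketch, and the HTML5 app could equally do the same thing itself.

    #import <UIKit/UIKit.h>

    // Sketch: after the page loads, enlarge each canvas's backing store by
    // window.devicePixelRatio while keeping its CSS (point) size, so canvas
    // drawing uses the full retina resolution.
    - (void)webViewDidFinishLoad:(UIWebView *)webView
    {
        NSString *js =
            @"(function () {"
            @"  var ratio = window.devicePixelRatio || 1;"
            @"  var canvases = document.getElementsByTagName('canvas');"
            @"  for (var i = 0; i < canvases.length; i++) {"
            @"    var c = canvases[i];"
            @"    var w = c.clientWidth, h = c.clientHeight;"
            @"    c.style.width = w + 'px';"
            @"    c.style.height = h + 'px';"
            @"    c.width = w * ratio;"
            @"    c.height = h * ratio;"
            @"    c.getContext('2d').scale(ratio, ratio);"
            @"  }"
            @"})();";
        [webView stringByEvaluatingJavaScriptFromString:js];
    }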
I am testing my app in the iPhone simulator. To test the retina display, I've set the hardware to iPhone (Retina).
Unfortunately, the entire scene seems to be scaled to four times its normal size!
The only thing I see is the bottom-left quarter of the entire scene.
Since the scene exceeds the bounds of the screen, only a quarter of it shows up on the iPhone screen.
I am using Cocos2d. What is the cause of this? I also have retina display enabled in the app delegate. Any help is greatly appreciated!
The simulator window is increasing to four times its normal size (twice on each axis) because by default it uses a 1:1 mapping of pixels.
In other words, one screen pixel = one device pixel. So when you go to the Retina display, which doubles the pixel density in each direction, you need four times as much space to display the device screen.
Edit:
In response to the updated question, you can use the simulator's scale feature: Window -> Scale -> 50% (or Command+3).
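Separately from the window size, it can be worth confirming in the app delegate that retina mode actually took effect. A rough sketch for older cocos2d-iphone versions (where enableRetinaDisplay: returns NO if the device or simulator doesn't support it; the surrounding setup is omitted):

    #import "cocos2d.h"

    // Sketch: confirm that retina mode was actually enabled, and log the
    // resulting content scale factor.
    - (void)applicationDidFinishLaunching:(UIApplication *)application
    {
        // ... usual CCDirector / EAGLView setup ...

        if (![[CCDirector sharedDirector] enableRetinaDisplay:YES]) {
            CCLOG(@"Retina display not supported on this device/simulator");
        }
        CCLOG(@"content scale factor: %.1f", CC_CONTENT_SCALE_FACTOR());
    }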
When programming for the iPad, font (and other) sizes are specified in "points." I have seen reference to a point as a pixel that is independent of screen resolution. But I am having trouble finding definite confirmation of how big a point is in real terms (that is, in terms of inches). Is a point equal to one pixel on the standard iPad screen, so 1pt = 1/132in? And then, to confirm, this means that an "iOS point" is a different unit than the printer's point = 1/72in?
Thanks.
See here and here (scroll down to "Points vs. Pixels") for the official word. Basically, a point is one pixel on a non-retina device (so its physical size varies between the iPad and the iPhone - it isn't related to a printer's point) and two pixels in each direction on a retina device (which has twice the number of pixels in each direction).
Drawing and positioning is done in points to allow the same code to run on both types of device - the frameworks will fill in the gaps to make drawing smoother on retina devices.
So, to answer your question: an iPad point is different from an iPhone point, which is different from a printer's point.
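As a rough worked example of the arithmetic (using the 1/132 in figure from the question for the non-retina iPad and the 1/72 in printer's point; this is just a back-of-the-envelope sketch):

    #import <Foundation/Foundation.h>

    // Rough arithmetic: what a 12 pt size works out to in inches and pixels.
    int main(void)
    {
        double fontSizePoints = 12.0;

        double iPadInches    = fontSizePoints / 132.0; // ~0.091 in on a non-retina iPad
        double printerInches = fontSizePoints / 72.0;  // ~0.167 in as a printer's point
        double retinaPixels  = fontSizePoints * 2.0;   // 24 pixels on an @2x screen

        NSLog(@"12 pt: %.3f in (iPad), %.3f in (printer), %.0f px (@2x)",
              iPadInches, printerInches, retinaPixels);
        return 0;
    }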