Access iPhone Absolute Pixel Position - ios

In the screen space of an iPhone/iPad, Apple uses points, which typically correspond to half the actual pixel resolution of the screen. My question is: is it possible to address the actual pixels themselves? For example, if I take a UIView and give it a frame of (0, 0, 0.5, 0.5) with a background color of red, I can't see it on the screen.
Just wondering if this is possible.
Thanks!

Sure it's possible.
The code you already have should be working (a UIView with a size of (0.5, 0.5)). I just ran it and captured this result from the simulator:
Yeah, that's difficult to see. Let's zoom in.
So yes, you can draw on-screen in smaller values than a single point.
However, to draw a single pixel, you'll want to be using a point value that is 1/scaleOfScreen (as not all devices have 2x displays). So, for example, you'll want your code to look something like this:
// The screen scale: e.g. 2.0 on a Retina display, 1.0 otherwise
CGFloat scale = [UIScreen mainScreen].scale;
// The width of one physical pixel, expressed in points
CGFloat pixelPointWidth = 1/scale;
UIView* v = [[UIView alloc] initWithFrame:CGRectMake(20, 20, pixelPointWidth, pixelPointWidth)];
v.backgroundColor = [UIColor redColor];
[self.view addSubview:v];
This will now create a UIView that occupies a single pixel on-screen.
Although, if you want to be doing a lot of pixel-perfect drawing, you should probably be using something lower level than a single UIView (have a look at Core Graphics).
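If you do go down the Core Graphics route, a minimal sketch of the same single-pixel fill might look like this (a UIView subclass overriding -drawRect:; the position and colour are arbitrary):
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // One physical pixel expressed in points (e.g. 0.5 on a 2x screen)
    CGFloat pixel = 1.0 / [UIScreen mainScreen].scale;
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillRect(ctx, CGRectMake(20, 20, pixel, pixel));
}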
However.
You may encounter some issues with this method when drawing on an iPhone 6 Plus. Because its scale differs from its nativeScale, it will first render your content in a logical 3x coordinate space and then downsample it to the actual screen resolution (roughly 2.6x).
This will most probably result in some pixel bleeding, where your 'pixel' view can be rendered in neighboring pixels (although usually at a reduced brightness).
Unfortunately, there is no easy way around this problem without using an even lower level API such as OpenGL or Metal, where you can circumvent this automatic scaling and then downsampling, and draw directly into the screen's actual coordinate space.
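If you want to detect this situation at runtime, you can compare the screen's scale with its nativeScale (available since iOS 8); a small sketch:
UIScreen *screen = [UIScreen mainScreen];
if (screen.scale != screen.nativeScale) {
    // e.g. on the iPhone 6 Plus: scale is 3.0, nativeScale is roughly 2.6
    NSLog(@"Content is rendered at %.2fx and downsampled to %.2fx",
          screen.scale, screen.nativeScale);
}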
Have a look here for a nice little overview on how different devices render content onto their screens.
Have a look here for more info on how pixel bleeding can occur on the iPhone 6 Plus.

You can estimate the pixel coordinates from the point coordinates by multiplying by a device-dependent coefficient (based on the screen's pixel density), but you don't want to do this.
Also, in your example you did not say that the coordinates were normalized, so basically you are trying to display a red box at the very first pixel (top left) with a size of half a point, which is why you can't see it.
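For what it's worth, the coefficient mentioned above is just the screen scale; a minimal sketch of the conversion (purely illustrative, and again not something you normally need):
// Hypothetical point-to-pixel conversion: multiply by the screen scale.
CGFloat scale = [UIScreen mainScreen].scale;          // e.g. 2.0 on most Retina devices
CGPoint pointPosition = CGPointMake(10.0, 10.0);
CGPoint pixelPosition = CGPointMake(pointPosition.x * scale,
                                    pointPosition.y * scale); // (20, 20) pixels at 2x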
EDIT
To draw a red box you can use this sample code:
// Draw a red box (run this from within -drawRect: or another active graphics context)
[[UIColor redColor] set];
UIRectFill(CGRectMake(20, 20, 100, 100)); // position (x : 20, y: 20) (still top left) and size (100*100 points)

Related

Autolayout and concentrically smaller nested UIImageViews

I have a photoshop file with 8 concentric 'rings' (although some aren't rings and are more irregular), with the largest at the 'back' and decreasing in size up to the 8th one being very small in the centre.
The 'rings' are drawn in such a way as that the smaller ones are 'internal' to its 'outer' or next larger ring. Each 'ring' has transparency on its outside, but also on its inside (where the smaller rings would 'sit').
I need to support all iOS devices (Universal App).
The largest image has a default size of 2048x2048 pixels, but every one of the 8 layers has a common 'centre' point around which they need to rotate, and around which they need to be fixed.
So, basically, all 8 have to be layered, one on top of the other, such that their centres are all perfectly aligned.
BUT the size of the artwork is larger than any iOS device, and the auto-layout has to allow for every device size and orientation, with the largest (rear) layer having an 8 point inset from the screen edges.
For those that can't picture this, here is a crude representation, where the dark background is 'transparent' and represents the smaller of the width or height of the iOS device (depending on orientation):
Note: the placement of each smaller UIImageView is precise. They all share a common centre (the centre of the screen), but each ring sits 'inside' the larger ring behind it, i.e. the centres of the green, hot pink and baby pink circles are empty / transparent, and no matter what screen size or orientation, they have to nest together perfectly, as they do in the Photoshop art assets.
I've spent hours in auto-layout trying to sort this out, and when I've got it working on one device and both orientations, it's not working on any others.
No code to show because I'm trying to do this in IB so I can preview on all devices.
Is this even possible using IB / Auto-Layout or do I have to switch to manually working out the scales by which to resize their UIImageView based on screen width / height at runtime and the relationship between each 'ring'?
Edit:
And unless I'm doing it wrong, embedding each UIImageView in a transparent UIView in order to fake 'insets' doesn't work either, because those inset values are hard coded: when it's perfect on a 12.9" iPad Pro, on an iPhone SE each 'inset' UIImageView is much more compressed and doesn't sit 'inside' its next larger ring, but looks like a tiny letter O with lots of surrounding blank space, because the insets don't scale. 100 points is a tiny amount of space on an iPad, but a third of the screen on an iPhone SE.
You can draw circles using CAShapeLayer and UIBezierPath. Since you are trying to fit this in a square, I'd define the container size to be whichever is smaller of the parent container's width and height; this allows for rotation and different screen sizes. As for the centre, you can always find it from the square container's bounds (container.bounds.size.width / 2). To rotate your layers/sublayers you can use this answer: https://stackoverflow.com/a/3929703/4577610
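A minimal sketch of that approach (the container view, ring colours, radii and line width are all illustrative; each ring is its own CAShapeLayer centred on the square container):
// Sketch: concentric ring layers centred in a square container.
// squareSide is the smaller of the parent's width/height, inset by 8 points per edge.
CGFloat squareSide = MIN(CGRectGetWidth(container.bounds),
                         CGRectGetHeight(container.bounds)) - 16.0;
CGPoint center = CGPointMake(container.bounds.size.width / 2,
                             container.bounds.size.height / 2);

NSArray<UIColor *> *ringColors = @[[UIColor greenColor],
                                   [UIColor magentaColor],
                                   [UIColor purpleColor]];
for (NSUInteger i = 0; i < ringColors.count; i++) {
    // Each radius is a fraction of the outermost one, so the rings
    // nest proportionally on any screen size.
    CGFloat radius = (squareSide / 2.0) * (1.0 - 0.25 * i);
    UIBezierPath *path = [UIBezierPath bezierPathWithArcCenter:center
                                                        radius:radius
                                                    startAngle:0
                                                      endAngle:2 * M_PI
                                                     clockwise:YES];
    CAShapeLayer *ring = [CAShapeLayer layer];
    ring.path = path.CGPath;
    ring.fillColor = [UIColor clearColor].CGColor;
    ring.strokeColor = ringColors[i].CGColor;
    ring.lineWidth = 4.0;
    [container.layer addSublayer:ring];
}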

Tool or technique for examining retina pixels iOS?

I am working on an iOS app that requires very precise drawing and would like to have some way of visually inspecting what, exactly, is being drawn to each (physical) pixel on my iOS device screen. This would be similar to the Pixie app dev tool on OS X, but for iOS -- instead of simply blowing up and anti-aliasing the screen, it would show a very clear grid of each and every pixel, and what shades/colors are being drawn to those pixels.
Does anyone know of such a tool or technique?
Here's a screenshot from Pixie on OS X on my Retina MacBook that shows the kind of output I'm looking for. You can clearly see, for example, that the designers specified 1 point (which spans two retina pixels) for the "minus" sign in the yellow minimize icon.
Assuming that you are using Quartz to do your drawing to a UIView, you can draw on pixel boundaries rather than point boundaries by using CGContextScaleCTM. Here is a rough outline of how to do this with a screenshot of your app. You could also have the user take a screenshot of a different app and then import it into yours.
-(void)drawRect:(CGRect)rect
{
    UIView* rootView = <GET_YOUR_ROOT_VIEW>;
    //You will probably want to change rect so you don't get distortion
    //This assumes that this view is the same size as the screen
    [rootView drawViewHierarchyInRect:CGRectMake(0, 0, rect.size.width*8, rect.size.height*8) afterScreenUpdates:YES];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    //Assumes this is an @2x Retina screen. You should check the screen scale
    //to be sure and change accordingly
    CGContextScaleCTM(ctx, 0.5, 0.5);
    //Since we made the screenshot 8x bigger, the pixels
    //on an @2x device are in increments of 4
    for (int x = 0; x < rect.size.width*8; x+=4)
    {
        //Draw a vertical line at x into ctx
    }
    for (int y = 0; y < rect.size.height*8; y+=4)
    {
        //Draw a horizontal line at y into ctx
    }
}
I am sorry that I don't have time to actually write and test this code myself, so there are probably a few little issues with it. But it should get you going in the right direction.
Also, since you are blowing up the image you actually don't need to change the scale with CGContextScaleCTM, you just need to draw your lines at the right intervals.
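For example, a rough sketch of that grid-line pass, placed inside -drawRect: after the screenshot has been drawn (assuming the same 8x blow-up on an @2x screen, so one physical pixel spans 4 points; the colour and hairline width are arbitrary):
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(ctx, [UIColor greenColor].CGColor);
CGContextSetLineWidth(ctx, 1.0 / [UIScreen mainScreen].scale); // one-pixel hairline
for (CGFloat x = 0; x < rect.size.width; x += 4) {
    CGContextMoveToPoint(ctx, x, 0);
    CGContextAddLineToPoint(ctx, x, rect.size.height);
}
for (CGFloat y = 0; y < rect.size.height; y += 4) {
    CGContextMoveToPoint(ctx, 0, y);
    CGContextAddLineToPoint(ctx, rect.size.width, y);
}
CGContextStrokePath(ctx);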

PDF vector images in iOS. Why does having a smaller image result in jagged edges?

I want to use pdf vector images in my app, I don't totally understand how it works though. I understand that a PDF file can be resized to any size and it will retain quality. I have a very large PDF image (a cartoon/sticker for a chat app) and it looks perfectly smooth at a medium size on screen. If I start to go smaller though, say thumbnail size the black outline starts to look jagged. Why does this happen? I thought the images could be resized without quality loss. Any help would be appreciated.
Thanks
I had a similar issue when programmatically changing a UIImageView's centre.
This can lead to pixel misalignment of your view, i.e. the x or y of the frame's origin (or the width or height of the frame's size) may lie on a non-integral value, such as x = 10.5, whereas it would display correctly at x = 10.
Rendering a view positioned a fraction of the way into a pixel results in jagged lines; I think it's related to aliasing.
Therefore wrap the CGRect of the frame with CGRectIntegral() to convert your frame's origin and size values to integers.
Example (Swift):
imageView?.frame = CGRectIntegral(CGRectMake(10, 10, 100, 100))
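For reference, the same thing in Objective-C (the fractional values are made up, just to show how CGRectIntegral snaps the rect outward to whole points):
CGRect fractional = CGRectMake(10.5, 10.25, 100.3, 100.0);
imageView.frame = CGRectIntegral(fractional); // becomes {10, 10, 101, 101}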
See the Apple documentation https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CGGeometry/#//apple_ref/c/func/CGRectIntegral

What does CALayer.contentsScale mean?

I'm reading this tutorial, iOS 7 Blur Effects with GPUImage. I have read the documentation; this variable is the ratio of pixels to points (px/pt). But I don't get this line of code.
_blurView.layer.contentsScale = (MENUSIZE / 320.0f) * 2;
What's the logic behind this line? How should I determine the contentsScale in my code?
If I don't set the contentsScale, which is default to 2.0, the screen looks like:
But after I set it to (MENUSIZE / 320.0f) * 2, the screen is:
This is strange because the contentsScale decreased but the image grew bigger. MENUSIZE is 150.0f.
contentsScale determines the size of the backing store bitmap, so that the bitmap will work on both nonretina and retina screens.
Let's say you make a layer (CALayer) into which you intend to draw. Let's say its size is 100x100. Then to make this layer look good on a double-resolution screen, you will want its contentsScale to be 2.0. This means that behind the scenes the bitmap is 200x200. But it is transformed so that you still treat it as 100x100 when you draw into it; you think in points, just as you normally would, and the backing store is scaled to match the doubled pixels of a retina device.
In most cases you don't have to worry about this because if a layer is the main layer of a view, its contentsScale is set automatically for the current device. But if you create a layer yourself, in code, out of whole cloth, then setting its contentsScale based on the scale of the main UIScreen is up to you.
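A minimal sketch of the standalone-layer case (assumed to run inside a view controller; the size is arbitrary):
// A layer created in code does not pick up the screen scale automatically.
CALayer *layer = [CALayer layer];
layer.frame = CGRectMake(0, 0, 100, 100);          // 100x100 points
layer.contentsScale = [UIScreen mainScreen].scale; // e.g. a 200x200-pixel backing store at 2x
[self.view.layer addSublayer:layer];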

creating UIButton programmatically is blurry than using storyboard

I have two custom UIButtons with the same image. One is created programmatically and is blurry; the other is created in a storyboard and works fine. Here is my code:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
UIButton *purchaseButton=[UIButton buttonWithType:UIButtonTypeCustom];
purchaseButton=[UIButton buttonWithType:UIButtonTypeCustom];
purchaseButton.frame=CGRectMake(0, 35.5+30, 177, 55);
[purchaseButton setImage:[UIImage imageNamed:@"GouMai1.png"] forState:UIControlStateNormal];
[self.view addSubview:purchaseButton];
}
Here is the project download link (because of the GFW, I can only upload it to a Chinese website). Is this an Xcode bug?
purchaseButton.frame=CGRectMake(0, 35.5+30, 177, 55);
There's your problem. Never give an interface object non-integral coordinates. It will be blurry! - You'll notice that in Interface Builder (storyboard) you can't do that.
The reason is that on the screen there are physical pixels, and there is no such thing as half a pixel: every pixel is either on or off, as it were. So if you use half-point coordinates, they cannot match pixels exactly, and the system will blur things to match (antialiasing etc.).
So, get rid of the .5 and things will be much, much better.
Contrary to what @matt is saying, you can use .5 now, so long as it's a Retina device. You can even have .5 widths and heights; that's how iOS 7 gets its very thin lines in some places.
When I run your sample app, in the Retina iPad simulator it is not blurry (well no more blurry than the source image). But in the non-Retina iPad simulator it is blurry, and this is because of the .5 in the frame.
You need to test the scale of the screen [UIScreen mainScreen].scale and add the .5 only if the scale is > 1.
Also note, you are creating two buttons with that code. You can remove the duplicate line purchaseButton=[UIButton buttonWithType:UIButtonTypeCustom];
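A minimal sketch of that scale check (variable names are illustrative; the half point is only added on Retina screens):
CGFloat y = 35.0 + 30.0;
if ([UIScreen mainScreen].scale > 1) {
    y += 0.5; // on a Retina screen a half-point offset still lands on a pixel boundary
}
purchaseButton.frame = CGRectMake(0, y, 177, 55);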
purchaseButton.frame=CGRectMake(0, 35.5+30, 177, 55);
Actually:
The screen has physical pixels, and there is no such thing as half a pixel.
The coordinates we specify for UI elements are in points.
The point-to-pixel ratio depends on the device's pixel density (ppi).
iOS lays everything out in points, so using half-point coordinates causes blur. (That's why Interface Builder won't let you enter fractional coordinates.)
So avoid fractional values, as forcing them causes blurriness; use the nearest integer value instead.
On the other hand, dividing dimensions dynamically (for example, an even number by an odd one) will often produce fractional results, so use ceil or floor when performing calculations at runtime.
purchaseButton.frame = CGRectMake(0, ceil(self.view.frame.size.width/3), 177, 55);
