Tool or technique for examining retina pixels iOS? - ios

I am working on an iOS app that requires very precise drawing and would like to have some way of visually inspecting what, exactly, is being drawn to each (physical) pixel on my iOS device screen. This would be similar to the Pixie app dev tool on OS X, but for iOS -- instead of simply blowing up and anti-aliasing the screen, it would show a very clear grid of each and every pixel, and what shades/colors are being drawn to those pixels.
Does anyone know of such a tool or technique?
Here's a screenshot from Pixie on OS X on my Retina MacBook that shows the kind of output I'm looking for. You can clearly see, for example, that the designers specified 1 point (which spans two retina pixels) for the "minus" sign in the yellow minimize icon.

Assuming that you are using Quartz to do your drawing in a UIView, you can draw on pixel boundaries rather than point boundaries by using CGContextScaleCTM. Here is a rough outline of how to do this with a screenshot of your app. You could also have the user take a screenshot of a different app and then import it into yours.
- (void)drawRect:(CGRect)rect
{
    UIView *rootView = <GET_YOUR_ROOT_VIEW>;
    // You will probably want to change rect so you don't get distortion.
    // This assumes that this view is the same size as the screen.
    [rootView drawViewHierarchyInRect:CGRectMake(0, 0, rect.size.width * 8, rect.size.height * 8)
                   afterScreenUpdates:YES];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Assumes this is an @2x Retina device. You should check the contentScaleFactor
    // to be sure, and change accordingly.
    CGContextScaleCTM(ctx, 0.5, 0.5);
    // Since we made the screenshot 8x bigger, the pixels
    // on an @2x device are in increments of 4.
    for (int x = 0; x < rect.size.width * 8; x += 4)
    {
        // Draw a vertical line at x into ctx
    }
    for (int y = 0; y < rect.size.height * 8; y += 4)
    {
        // Draw a horizontal line at y into ctx
    }
}
I am sorry that I don't have time to actually write and test this code myself, so there are probably a few small issues with it. But this should get you going in the right direction.
Also, since you are blowing up the image, you don't actually need to change the scale with CGContextScaleCTM; you just need to draw your lines at the right intervals.

Related

Access iPhone Absolute Pixel Position

In the screen space of an iPhone/iPad, Apple uses points, which are typically half the actual resolution of the screen. My question is: is it possible to access the actual pixels themselves? For example, if I take a UIView and give it a frame of (0, 0, 0.5, 0.5) with a background color of red, I can't see it on the screen.
Just wondering if this is possible.
Thanks!
Sure it's possible.
The code you already have should be working (a UIView with a size of (0.5, 0.5)). I just ran it and captured this result from the simulator:
Yea. That's difficult to see. Let's zoom that in.
So yes, you can draw on-screen in smaller values than a single point.
However, to draw a single pixel, you'll want to be using a point value that is 1/scaleOfScreen (as not all devices have 2x displays). So, for example, you'll want your code to look something like this:
CGFloat scale = [UIScreen mainScreen].scale;
CGFloat pixelPointWidth = 1/scale;
UIView* v = [[UIView alloc] initWithFrame:CGRectMake(20, 20, pixelPointWidth, pixelPointWidth)];
v.backgroundColor = [UIColor redColor];
[self.view addSubview:v];
This will now create a UIView that occupies a single pixel on-screen.
Although, if you want to be doing a lot of pixel-perfect drawing, you should probably be using something lower level than a single UIView (have a look at Core Graphics).
However.
You may encounter some issues with this method when drawing on an iPhone 6 Plus. Because its screen's scale (3x) differs from its nativeScale (around 2.6x), it will first render your content in the logical 3x coordinate space and then downsample to the actual screen resolution.
This will most probably result in some pixel bleeding, where your "pixel" view is rendered into neighboring pixels (although usually at a reduced brightness).
Unfortunately, there is no easy way around this problem without using an even lower level API such as OpenGL or Metal, where you can circumvent this automatic scaling and then downsampling, and draw directly into the screen's actual coordinate space.
Have a look here for a nice little overview on how different devices render content onto their screens.
Have a look here for more info on how pixel bleeding can occur on the iPhone 6 Plus.
You can estimate the pixels from the points based on the device resolution (in ppi) by multiplying by a coefficient, but you don't want to do this.
Also, in your example you did not normalize the coordinates, so basically you are trying to display a red box at the first pixel (top left) with a size of half a point, which is why you can't see it.
EDIT
To draw a red box you can use this sample code:
// Draw a red box
[[UIColor redColor] set];
UIRectFill(CGRectMake(20, 20, 100, 100)); // position (x : 20, y: 20) (still top left) and size (100*100 points)

Final Cut Pro X cropped output media

I've created a simple 7-second clip which uses a standard FCPX plugin: the "Bold Fin" title.
While I am editing this clip, everything fits the screen:
Everything is also OK when I start exporting this clip to the master file:
But when the actual file is ready, it seems to be cropped:
Could somebody please help me find the reason why my output is cropped? And how can I fix this issue?
Judging by the image you provided, you have non-square pixels in your footage or in your project settings (or the footage's aspect ratio isn't 16:9). I took a screenshot of the image inside FCP's canvas and found that you have rectangular pixels stretched along the X axis instead of square ones.
Seemingly, FCPX, trying to compensate for the pixel aspect ratio in a Full HD export (PAR = 1.0, AR = 16:9), stretched the pixels along the Y axis, which led to the cropping.

PDF vector images in iOS. Why does having a smaller image result in jagged edges?

I want to use PDF vector images in my app, but I don't totally understand how they work. I understand that a PDF file can be resized to any size and it will retain quality. I have a very large PDF image (a cartoon/sticker for a chat app) and it looks perfectly smooth at a medium size on screen. If I start to go smaller though, say thumbnail size, the black outline starts to look jagged. Why does this happen? I thought the images could be resized without quality loss. Any help would be appreciated.
Thanks
I had a similar issue when programmatically changing a UIImageView's centre.
The result of this can be pixel misalignment of your view, i.e. the x or y of the frame's origin (or the width or height of the frame's size) may end up at a non-integral value, such as x = 10.5, where it would display correctly at x = 10.
Rendering views positioned a fraction of the way into a pixel results in jagged lines; I think it's related to aliasing.
Therefore, wrap the frame's CGRect with CGRectIntegral() to convert your frame's origin and size values to integers.
Example (Swift):
imageView?.frame = CGRectIntegral(CGRectMake(10, 10, 100, 100))
See the Apple documentation https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CGGeometry/#//apple_ref/c/func/CGRectIntegral

Issues with sprite location in cocos2d

I have a sprite, that I'd like to use as background image(using cocos2d).
CCSprite *abc = [CCSprite spriteWithImageNamed:@"background.png"];
abc.position = ccp(self.contentSize.width/2,self.contentSize.height/2);
The first image is the original; the second one is a screenshot from the simulator. The resolution of the image is 640×1136. What should I do to make it fill the whole screen? How should I position it?
The code you are using is correct.
The result you are getting is not what you expected because you probably loaded a Retina image on a non-Retina screen.
Check out Cocos2d Naming conventions here for more info.
For the background image you chose of the sun, I am assuming that you want it to be always in the top right corner. (That makes more sense to me than centering it).
Now an elegant solution to accomplish this would be to take the image you have already created for the 4-inch screen and define a rule so that its top right corner is always at the top right corner of the screen. On 3.5-inch screens the image would simply be clipped.
Now, first you want to define an anchor point as
_background.anchorPoint = ccp(1.0f, 1.0f);
This will tell Cocos to position your background relative to the top right corner.
Now you can go on and position it so that it is always at the top corner of the screen.
_background.position = ccp(self.scene.contentSize.width, self.scene.contentSize.height);
This would be the standard and best way to do it. Results and benefits:
Works on 3.5 and 4 inch screens without needing specific image sizes
Simple, with no unnecessary code and, especially, no UI_USER_INTERFACE_IDIOM() testing
The way most everybody does it
Another way of positioning in the top right corner
You can also check out the new positionType property for CCNode in the reference. CCPositionUnitNormalized can help you define a positioning rule similar to saying position this to the 100% width and 100% height of the parent container. It would be something like this.
_background.positionType = CCPositionTypeNormalized;
_background.position = ccp(1.0f, 1.0f);
and have the same result if you prefer this syntax.
You can either scale the image to fit the device height, or use a separate image for the 4-inch and the 3.5-inch iPhones.
For Scaling
abc.scaleY = winSize.height/abc.contentSize.height;
For Specific image
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPhone)
{
    if ([[UIScreen mainScreen] bounds].size.height == 568)
    {
        // iPhone 5: add the 4-inch image
    }
    else
    {
        // add the 3.5-inch screen image
    }
}
I am unsure what "self" is referencing, but I assume it is a layer. It appears you are trying to center the image on the screen, or you need to offset it based on the size of the screen. If so you should get the screen size as it will allow you to place the image properly no matter what the resolution of the screen is. I am using Cocos2d 2.1.
CGSize winSize = [CCDirector sharedDirector].winSize;
This is the winSize in points. You can also get the winSize in pixels:
CGSize winSizeInPixels = [CCDirector sharedDirector].winSizeInPixels;
Use whatever works best for you. You can then center the image with the following, for example:
abc.position = ccp(winSize.width / 2, winSize.height / 2);
Regardless of whether or not you are trying to center the image, knowing the screen size will allow you to place the image based on that screen size so that it appears properly.
Obviously the size of the image and whether or not it fills the screen must be addressed.
I hope this helps.

Box2D coordinate system concepts: PTM_RATIO and retina display

Recently I published a question regarding this topic and I received a useful answer, but my experimentation points me in a different direction that I don't understand.
From the answer it is clear that we should use the same PTM_RATIO for Retina and non-Retina devices. However, we may double it from iPhone to iPad if we want to show the same portion of the world. In my case I used 50 for iPhone and 100 for iPad, because Box2D simulation works best when bodies are between 0.1 and 10 m, and my main sprite is about 2 m.
I used PhysicsEditor to build the fixtures using GB2ShapeCache, without success on Retina devices. Then I decided to feed Box2D coordinates directly, and I reached strange conclusions that I would like to clarify.
I created a debug method (independent of any sprite) to draw a single line at 1/3 of the screen's height, spanning from the left edge to 1/3 of the screen's width.
- (void)debugGround
{
    // iPad: 1024x768
    // iPhone: 480x320
    CGSize winSize = [CCDirector sharedDirector].winSize; // unit is points

    b2EdgeShape groundShape;
    b2FixtureDef groundFixtureDef;
    groundFixtureDef.shape = &groundShape;
    groundFixtureDef.density = 0.0;

    b2Vec2 left = b2Vec2(0, winSize.height/3/PTM_RATIO);
    b2Vec2 right = b2Vec2(winSize.width/3/PTM_RATIO, winSize.height/3/PTM_RATIO);
    groundShape.Set(left, right);
    groundBody->CreateFixture(&groundFixtureDef);
}
If Box2D takes coordinates in points and converts them by dividing by PTM_RATIO, the result should be the same for iPhone and iPad, Retina and non-Retina.
The result for iPad non retina is as expected:
But for iPhone retina and iPad retina, the fixtures are doubled!!
The most obvious correction is to divide by 2, which means dividing by CC_CONTENT_SCALE_FACTOR().
I managed to make it work for all devices refactoring the code to:
- (void)debugGround
{
    CGSize winSize = [CCDirector sharedDirector].winSize;

    b2EdgeShape groundShape;
    b2FixtureDef groundFixtureDef;
    groundFixtureDef.shape = &groundShape;
    groundFixtureDef.density = 0.0;

    b2Vec2 left = b2Vec2(0, winSize.height/3/PTM_RATIO/CC_CONTENT_SCALE_FACTOR());
    b2Vec2 right = b2Vec2(winSize.width/3/PTM_RATIO/CC_CONTENT_SCALE_FACTOR(), winSize.height/3/PTM_RATIO/CC_CONTENT_SCALE_FACTOR());
    groundShape.Set(left, right);
    groundBody->CreateFixture(&groundFixtureDef);
}
I also managed to display the lower platforms correctly by dividing the vertices, the offsets, and every other place where I use PTM_RATIO to convert to Box2D coordinates by the scale.
Supposedly, I shouldn't use CC_CONTENT_SCALE_FACTOR() to multiply positions at all, because the GL functions already take it into account.
Can anyone clarify this behavior? Which concepts am I getting wrong?
I hope this helps the community understand the Box2D coordinate system better.
You misunderstood: GL functions (and this includes the ccDraw* functions!) require multiplication by the content scale factor because GL works at pixel resolution, whereas UIKit views and cocos2d nodes use point coordinates.