Sub-pixel rendering of fonts on the iPad

Sub-pixel font rendering like ClearType dramatically improves font display resolution and improves screen readability. How would I program sub-pixel rendering of a font (in general), and how can this be achieved on the iPad (C, C++, or Objective-C on an iOS device)? Fonts are quite blurry at certain sizes on the iPad, and I know that the iPad's display would work well with this technique...
So, how would I develop a font rendering engine for the iPad (e.g. How do I even access sub-pixels? Do I use OpenGL? Is there an existing open-source font rendering engine written in C, C++, or Objective-C for Mac OS X?)?

Each pixel on the iPad is a rectangle of red, green, and blue components, so one might think that sub-pixel font rendering would be a good choice for the device.
But consider that this device can be easily changed from portrait to landscape modes, and applications are expected to respond to that change. This would imply that your sub-pixel font mechanism would also have to respond to that change, and you would need two separate sub-pixel descriptions for each font.
Now throw in the fact that developers expect to be able to write universal applications that run on both the iPad and the iPhone in a single purchase/download. But look at the different pixel configurations across the various generations of iPhones. Each of those, recall, would need a different font description in portrait mode and in landscape mode. Now you have an explosion of font descriptions.
Now recall that we're speaking of portable devices where the most precious resource is the battery, and sub-pixel font rendering is more computationally intensive.
I'm guessing that this is not too different from the thought process that led Apple to eschew sub-pixel font rendering in favor of hoping that display technology increases pixel density to the point where it is no longer necessary (the retina display on the iPhone 4 being the first step in that direction.)
I would wager that some future edition of the iPad will have a display with similar density, and it won't matter as much. Any effort you invest in inventing a sub-pixel font rendering mechanism for your iPad application will be obviated at that point, so I would recommend not going down that path.

Related

How To Prepare Image Assets in iOS supporting both iPads and iPhones?

I usually don't worry much about my assets, even when my project supports iPads, as long as the image view for the app's background is set to Aspect Fill.
Also, here are some links I've found, though they aren't closely related to this question.
An older question with older answers: How to support both iPad and iPhone retina graphics in universal apps
A good question with good answers, although both focus only on iPhones:
iOS: Preparing background images for applications
Going back to the question: if I have an Adobe XD, Sketch, Photoshop, or other file that lets me export an image/asset, at what resolution should I start? Do I start with the largest possible size (for the iPad Pro), which is 1024x1366, and then let the software cut that down into @1x and @2x sizes?
If I only had to support iPhones, this would be much easier. Thank you!
If your source is vector based, then (obviously?) it's a non-issue...
With bitmap / raster images, you almost always get better results by scaling down.
Depending on the image itself (a photo tends to scale much better than a line-drawing), you may not be happy with simple "auto-gen" features... in which case, you'd need to manually "scale and tweak".
(Hope that helps).

Photoshop Font Pixel Size to Xcode Interface Builder Font Point Size

How do I match the font pixel size given to me by my designer in Photoshop to the correct font point size in Xcode Interface Builder?
For example, my designer is using Helvetica Neue Regular at 32px in his design.
I've used a few point-to-pixel conversion sites, but the results don't seem exact.
I have attempted to follow the answer from this question, but to no avail:
https://stackoverflow.com/a/6107836/1014164
You will never get perfect results when visually comparing a Photoshop comp to a real program. In fact, it's not uncommon for text layout to differ between computers, because version and operating system differences (as well as monitor layouts) cause the text to reflow every time it's edited.
Unless things are very much different in other versions of Photoshop, your designer hasn't specified 32px because Photoshop doesn't lay text out in pixels - it works in points/picas. The exact text rendering is also dependent on the document's resolution (which is different between print and screen).
The best you can do is get the text to look roughly proportional to the designer's intent. In modern iOS, most apps will use the user's customized font settings anyway.
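As a rule of thumb: if the comp is a standard 72 ppi screen document, Photoshop's pixel value and the point value you type into Interface Builder are numerically equal; at any other document resolution the conversion is points = pixels × 72 / ppi. A minimal sketch of that arithmetic (the function name is made up for illustration):

```cpp
// Sketch: convert a pixel size from a Photoshop comp into a point size,
// given the comp's document resolution in pixels per inch (ppi).
// At 72 ppi the two values coincide, which is why a "32px" spec is
// usually just entered as 32pt in Interface Builder.
double pixelsToPoints(double pixels, double ppi) {
    return pixels * 72.0 / ppi;
}
```

For example, 32px in a 72 ppi document is 32pt, while 32px in a 144 ppi (retina @2x) comp is 16pt. Even then, expect only a rough match for the reasons above.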

How can I use GLKView to draw at a lower resolution?

I have an app based on "OpenGL Game" Xcode template, for everyday testing/dev I would like to render full screen but at lower resolution in simulator (e.g. 1/2 or 1/4). Any efficient/savvy way to put this in place?
(In case anybody wonders: I want to do this because my app runs very slowly in the Simulator, so rendering at a smaller resolution would make testing/debugging a lot more programmer-friendly.)
The contentScaleFactor of the view is 2.0 by default on a Retina display. If you reduce the scale factor, GLKView automatically uses a smaller framebuffer and scales its contents up to screen size for display.
Scale factor 1.0 is half size (or the same number of pixels as a non-Retina screen). 0.5 would be quarter size — big, chunky pixels on any display. Non-integral scale factors between 1.0 and 2.0 work, too, and can be a great way to compromise between quality and performance on a Retina display.
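The underlying relationship is simply framebuffer pixels = view size in points × contentScaleFactor. As a sketch of the arithmetic only (this is not actual UIKit/GLKit code, and the rounding detail is an assumption):

```cpp
#include <cmath>

// Sketch: the framebuffer dimensions a GLKView-style view would allocate
// for a given view size (in points) and content scale factor. The view
// sizes used below are illustrative; rounding to whole pixels is assumed.
struct Size { int w, h; };

Size framebufferSize(double pointsW, double pointsH, double scale) {
    return { static_cast<int>(std::lround(pointsW * scale)),
             static_cast<int>(std::lround(pointsH * scale)) };
}
```

So a 1024x768-point view renders into a 1024x768 buffer at scale 1.0, but only a 512x384 buffer at scale 0.5, roughly quartering the fragment workload.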
Update: just had a vote remind me of this answer. I'm surprised I wouldn't have said this originally, but I may as well add it now...
The iOS Simulator runs OpenGL ES using a software renderer (i.e. on the host Mac's CPU, not its GPU). That's why it's slow. Performance characteristics and rendering results can vary from renderer to renderer, so never trust the simulator for anything OpenGL/GPU-related (except perhaps in very broad strokes).

Which screen resolution should I use?

I'm planning on making my first game in XNA (a simple 2D game) and I wonder which screen resolution would be appropriate to target.
Resolution for a 2D game is a difficult issue.
Some people ignore it. World of Goo (for PC), for one very famous example, simply always runs at 800x600 on the PC, no matter what. And look how successful it was.
It helps to think about what kind of device you will be targeting. Here are some common resolutions and the devices they apply to:
1280x720 (720p, Xbox 360 "safe" resolution - free hardware scaling, works everywhere)
1920x1080 (1080p, Xbox 360 maximum resolution - can't auto-scale to all resolutions)
800x480 (Windows Phone 7)
1024x768 (iPad)
480x320 (iPhone 3GS and earlier)
960x640 (iPhone 4 retina display)
Android devices also have similar resolutions to WP7 and iOS devices.
(Note that consoles require you to render important elements inside a "title-safe" area or "action-safe" area. Typically 80% and 90% of the full resolution.)
Here is the Valve Hardware Survey, which you can see lists the common PC resolutions (under "Primary Display Resolution").
Targeting 800x480 for a mobile game, or 1280x720 for a desktop/console game, is a good place to start.
If you do want to support multiple resolutions, it is important to think about aspect ratio. Here is an excellent question that lists off some options. Basically your options are letter/pillar-boxing or bleeding (allowing for extra rendering outside "standard" screen bounds - like a title-safe area), or some combination of the two.
If your graphics need to be "pixel perfect" and simply scaling them won't work, then I would recommend targeting a series of base resolutions, and then boxing/bleeding to cover any excess screen on a particular device. When I do this, I usually provide assets for these target screen heights: 320, 480, 640, 720, 1080. Note that providing 5 versions of each asset is a huge amount of work - so try to use scaling wherever possible.
Many choices about resolution handling will depend on what style of game you are making. For example: whether you try to match a horizontal or vertical screen size will depend largely on what direction your game will mostly scroll in.
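The letter/pillar-boxing option mentioned above amounts to fitting your fixed "virtual" resolution inside whatever screen you actually get. A minimal sketch (names are illustrative, not XNA API):

```cpp
#include <algorithm>
#include <cmath>

// Sketch: fit a fixed virtual resolution inside an arbitrary screen while
// preserving aspect ratio. Unused screen area becomes letterbox bars
// (top/bottom) or pillarbox bars (left/right).
struct Viewport { int x, y, w, h; };

Viewport letterbox(int screenW, int screenH, int virtualW, int virtualH) {
    double scale = std::min(static_cast<double>(screenW) / virtualW,
                            static_cast<double>(screenH) / virtualH);
    int w = static_cast<int>(std::lround(virtualW * scale));
    int h = static_cast<int>(std::lround(virtualH * scale));
    // Center the scaled viewport on screen; the leftover margins are bars.
    return { (screenW - w) / 2, (screenH - h) / 2, w, h };
}
```

For example, an 800x480 game on a 1024x768 iPad screen scales to 1024x614 with black bars above and below; a "bleed" approach would instead scale to cover 1024x768 and crop the horizontal excess.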
When I first started with C++ graphics I used 320*240, or 800*600 when I had to use larger images. But it's really up to you, whatever you prefer, as long as you don't use odd values like 123*549 or something.
'normal' resolutions include but are not limited to:
160*120
320*240
640*480 (probably the most common)
800*600
1024*768

Bada scaling question

I'm developing a reader application for Bada and have a silly question.
Does a smooth way to convert pt size to pixel size exist?
I found something like this, but I'm still hoping there's a simple formula I could apply and be happy with.
Points are a "real-world" length unit (they are generally defined as 1/72 in), but pixels do not have a definite real world size, since this depends on the resolution of the device.
For example, the pixels on my screen are about 0.3 mm wide, while the ones of my phone are about 0.15 mm, and the "pixels" of my laser printer are 0.02 mm wide. Thus, to go from pixels to real world units, you need the resolution of the specific device, i.e. the pixels/real world unit ratio, which, most often, is expressed in DPI (dots per inch, where "dot" is intended as "pixel" for devices that work with pixels).
When dealing with printing/scanning devices the "real world size" is important, so it's almost always provided by the OS in some way and is correct; on the other hand, with screens the situation is quite different.
In most situations you don't really care about the "real world size" of stuff displayed on screens, since no one is ever really measuring anything on the screen. Also, onscreen layouts are often partly done in pixels for a variety of reasons (simplicity being the first).
On the other hand, text and other elements' sizes are often specified in points, twips, and other "real-world units", and in general good window layouts should be done in real-world units so they can adapt to screens with high pixel densities, where pixel-based layouts would be unreadably small.
For this reason, the OS usually provides a DPI value for the screen, but in practice it's often left at a default (usually 72 DPI) regardless of the actual attached screen (partly to avoid breaking badly designed interfaces), while remaining configurable so the user can adjust it to a comfortable value.
As for Bada, I read here that the OS provides neither a real nor a "fake" DPI value, so there's no reliable way to convert from points to pixels. On the other hand, you could simply use the usual "default" value of 72 DPI for your conversions. Notice that 72 DPI wasn't chosen by chance: since there are 72 points per inch, assuming 72 pixels per inch means a point is equal to a pixel. Not correct, but in your case "good enough".
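The general formula is pixels = points / 72 × dpi; with the "default" assumption of 72 DPI the factor collapses to 1 and a point maps straight to a pixel. A sketch of the conversion (function name is illustrative):

```cpp
#include <cmath>

// Sketch: general point-to-pixel conversion. With the fallback assumption
// of 72 DPI suggested above, a point maps 1:1 to a pixel; with a real
// device DPI the same formula gives the true pixel size.
int pointsToPixels(double points, double dpi = 72.0) {
    return static_cast<int>(std::lround(points / 72.0 * dpi));
}
```

So 12pt text is 12 pixels under the 72 DPI assumption, but 36 pixels on a genuine 216 DPI mobile screen, which is exactly the discrepancy raised in the next answer.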
Assuming 72 DPI for Bada is not a good choice, since modern mobile devices have DPIs around 200-300.
Unfortunately, Bada went the iPhone route: a small set of devices, each with fixed hardware, and you release your application per device. That means you can visit Samsung's website, look up the physical screen size of each device, compute its real DPI yourself, and store the values in a table. At runtime, get the device name and look it up in that table.
AFAIK you have to upload your application to the Bada shop for each device separately, and the Bada SDK assumes you compile a different application for each target. A target is specified by screen resolution, so I guess you can expect the real screen to be that size.
Well, I think this design is clumsy, but it might really be the way they expect you to develop for their platform.
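The per-device lookup table described above could be sketched like this. Note that the model names and DPI values below are made-up placeholders for illustration, not real Samsung/Bada device data:

```cpp
#include <map>
#include <string>

// Sketch of the suggested workaround: a hand-built table mapping device
// model names to a DPI computed from the physical screen size you looked
// up yourself. The entries here are hypothetical placeholders, NOT real
// device specifications.
double dpiForDevice(const std::string& model) {
    static const std::map<std::string, double> kDpiTable = {
        { "ExampleDeviceA", 233.0 },  // placeholder value
        { "ExampleDeviceB", 180.0 },  // placeholder value
    };
    auto it = kDpiTable.find(model);
    return it != kDpiTable.end() ? it->second : 72.0;  // 72 DPI fallback
}
```

The 72 DPI fallback matches the earlier answer's "good enough" default for any device you haven't catalogued yet.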
