Bada scaling question - font-size

I'm developing a reader application for Bada and have a silly question.
Is there a clean way to convert a point size to a pixel size?
I found a few partial answers, but I'm still hoping there's a formula I can just apply and be done with it.

Points are a "real-world" length unit (they are generally defined as 1/72 in), but pixels do not have a definite real world size, since this depends on the resolution of the device.
For example, the pixels on my screen are about 0.3 mm wide, those on my phone about 0.15 mm, and the "pixels" of my laser printer 0.02 mm. Thus, to go from pixels to real-world units, you need the resolution of the specific device, i.e. the pixels-per-real-world-unit ratio, which is most often expressed in DPI (dots per inch, where "dot" means "pixel" for devices that work with pixels).
When dealing with printing/scanning devices the "real world size" is important, so it's almost always provided by the OS in some way and is correct; on the other hand, with screens the situation is quite different.
In most situations you don't really care about the "real world size" of stuff displayed on screens, since no one is ever really measuring anything on the screen. Also, onscreen layouts are often partly done in pixels for a variety of reasons (simplicity being the first).
On the other hand, text and other elements' sizes are often specified in points, twips and other "real world units", and in general good window layouts should be done in "real world units" so they adapt easily to screens with high pixel densities, where pixel-based layouts would be unreadable.
For this reason, the OS usually provides a DPI value for the screen, but in general it's left at the same default value (usually 72 DPI) regardless of the actual attached screen (partly to avoid breaking badly designed interfaces), while remaining configurable so the user can adjust it to a comfortable value.
As for Bada, I read here that the OS provides neither a real nor a "fake" DPI value, so there's no real way to convert from points to pixels. On the other hand, you could simply use the usual "default" 72 DPI value for your conversions. Notice that 72 DPI wasn't chosen by chance: since there are 72 points per inch, assuming 72 pixels per inch means a point comes out equal to a pixel. Not correct, but in your case "good enough".
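For illustration, a minimal sketch of that conversion (shown in Swift, though the arithmetic is language-agnostic; the 72 DPI default is exactly the "good enough" assumption described above):

```swift
// Convert a typographic point size to a pixel size for a given density.
// 1 point = 1/72 inch, so: pixels = (points / 72) * dpi.
// With the default dpi of 72, points and pixels come out equal.
func pixels(fromPoints points: Double, dpi: Double = 72.0) -> Double {
    return points / 72.0 * dpi
}

// pixels(fromPoints: 12)            // 12.0 px under the 72 DPI assumption
// pixels(fromPoints: 12, dpi: 233)  // ~38.8 px on a ~233 DPI screen
```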

Assuming 72 DPI for Bada is not a good choice, since modern mobile devices have densities around 200-300 DPI.

Unfortunately, Bada went the iPhone way: there are only a few devices, each with fixed hardware, and you release your application for each one. That means you can visit Samsung's website, read the real physical size of each screen, compute the real DPI for each device yourself, and store the results in a table. At runtime, you get the device name and look it up in your table.
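A minimal sketch of that lookup, in Swift for brevity (Bada itself is C++); the model names and DPI values below are placeholders, not verified figures — you would fill them in from the screen sizes published on Samsung's spec pages:

```swift
// Hypothetical table: device model name -> DPI you computed by hand from
// the physical screen size and pixel resolution of each Bada device.
let dpiByModel: [String: Double] = [
    "Wave S8500": 283.0,     // placeholder value
    "Wave II S8530": 252.0,  // placeholder value
]

// Fall back to the conventional 72 DPI for unknown devices.
func dpi(forModel model: String) -> Double {
    return dpiByModel[model] ?? 72.0
}
```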
AFAIK you have to upload your application to the Bada shop separately for each device, and the Bada SDK assumes you will compile a different application for each target. A target is specified by screen resolution, so I guess you can expect the real screen to be that size.
Well, I think this design is stupid, but it might really be how they expect you to develop for their platform.

Related

How can I use GLKView to draw at a lower resolution?

I have an app based on the "OpenGL Game" Xcode template; for everyday testing/dev I would like to render full screen but at a lower resolution in the Simulator (e.g. 1/2 or 1/4). Any efficient/savvy way to put this in place?
(In case anybody wonders: I want to do this because my app runs very slowly in the Simulator, so rendering at a smaller resolution would make testing/debugging a lot more programmer-friendly.)
The contentScaleFactor of the view is 2.0 by default on a Retina display. If you reduce the scale factor, GLKView automatically uses a smaller framebuffer and scales its contents up to screen size for display.
Scale factor 1.0 is half size (or the same number of pixels as a non-Retina screen). 0.5 would be quarter size — big, chunky pixels on any display. Non-integral scale factors between 1.0 and 2.0 work, too, and can be a great way to compromise between quality and performance on a Retina display.
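A sketch of what that might look like, assuming a `GLKViewController` subclass (hypothetically named `GameViewController` here); the `targetEnvironment(simulator)` check is one way to keep the reduced resolution out of device builds:

```swift
import GLKit

class GameViewController: GLKViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        guard let glkView = view as? GLKView else { return }
        #if targetEnvironment(simulator)
        // Render into a much smaller framebuffer; GLKView scales the
        // result up to fill the screen automatically.
        glkView.contentScaleFactor = 0.5
        #endif
    }
}
```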
Update: just had a vote remind me of this answer. I'm surprised I wouldn't have said this originally, but I may as well add it now...
The iOS Simulator runs OpenGL ES using a software renderer (i.e. on the host Mac's CPU, not its GPU). That's why it's slow. Performance characteristics and rendering results can vary from renderer to renderer, so never trust the simulator for anything OpenGL/GPU-related (except perhaps in very broad strokes).

Define and calculate Apple's definition of the unit point

Apple writes in the user interface guidelines that a tappable element should be at least 44 x 44 points, and they give the Calculator app as a reference example. When I measured its tappable buttons I got a height of 10 mm (38 px). Using tools I found online, I converted this to approximately 29 PostScript points.
My question is what does Apple mean by 44 points? How do I convert this correctly?
Apple's use of 'point' is not the same as a typographical point. It's an abstract unit that doesn't have a fixed correlation to pixels, nor to any physical measurement (e.g., millimeters).
The first part of that is easy to see; a point width on a retina iPhone covers two pixels, twice as many as on a non-retina iPhone.
The iPad Mini makes the second part of that first statement pretty clear. From an iOS developer's point of view, 44 points is the same measurement on an iPad 2 as it is on the mini. But to a user, those represent very different physical (millimeter) sizes.
There's a bit more written under Points vs Pixels in their drawing guide.
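As a worked example of that physical difference, here is the points-to-millimeters arithmetic using the commonly published densities of those two screens (iPad 2 at 132 PPI, first iPad mini at 163 PPI, both with scale factor 1):

```swift
// How large a length given in Apple points is in the real world,
// for a screen with a known scale factor and pixel density.
func millimeters(points: Double, scale: Double, ppi: Double) -> Double {
    let pixels = points * scale  // points -> device pixels
    let inches = pixels / ppi    // pixels -> inches
    return inches * 25.4         // inches -> millimeters
}

// millimeters(points: 44, scale: 1, ppi: 132)  // iPad 2:    ~8.5 mm
// millimeters(points: 44, scale: 1, ppi: 163)  // iPad mini: ~6.9 mm
```

The same 44 points, nearly two millimeters of difference — which is exactly why the HIG figure cannot be converted to a single physical size.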
It does seem a bit strange for the HIG to prescribe sizes that don't correspond to, well, humans. In practice, for a UX designer, your approximations are probably fine.
A point is 1/72 of an inch, so 44 points is 0.6 inch or 1.5 cm. I don't think Apple can change this definition all by themselves.

Printers' points vs. iOS points

When programming for the iPad, font (and other) sizes are specified in "points." I have seen reference to a point as a pixel that is independent of screen resolution. But I am having trouble finding definite confirmation of how big a point is in real terms (that is, in terms of inches). Is a point equal to one pixel on the standard iPad screen, so 1pt = 1/132in? And then, to confirm, this means that an "iOS point" is a different unit than the printer's point = 1/72in?
Thanks.
See here and here (scroll down to points vs. pixels) for the official word. Basically, a point is one pixel on a non-retina device (so the size varies between the iPad and the iPhone - it isn't related to a printer's point) and 2 pixels on a retina device (which has twice the number of pixels in each direction).
Drawing and positioning is done in points to allow the same code to run on both types of device - the frameworks will fill in the gaps to make drawing smoother on retina devices.
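A small Swift illustration of that relationship: the only thing that maps points to pixels on iOS is the screen's scale factor, not any physical unit like the printer's 1/72 inch.

```swift
import UIKit

let scale = UIScreen.main.scale  // 1.0 on non-retina, 2.0 on retina

let pointSize = CGSize(width: 100, height: 44)  // what your layout code uses
let pixelSize = CGSize(width: pointSize.width * scale,   // what the hardware
                       height: pointSize.height * scale) // actually draws
```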
An iPad point is different to an iPhone point, which is different to a printers point, to answer your question.

Which screen resolution should I use?

I'm planning on making my first game in XNA (a simple 2D game) and I wonder which screen resolution would be appropriate to target.
Resolution for a 2D game is a difficult issue.
Some people ignore it. World of Goo (for PC), for one very famous example, simply always runs at 800x600 on the PC, no matter what. And look how successful it was.
It helps to think about what kind of device you will be targeting. Here are some common resolutions and the devices they apply to:
1280x720 (720p, Xbox 360 "safe" resolution - free hardware scaling, works everywhere)
1920x1080 (1080p, Xbox 360 maximum resolution - can't auto-scale to all resolutions)
800x480 (Windows Phone 7)
1024x768 (iPad)
480x320 (iPhone 3GS and earlier)
960x640 (iPhone 4 retina display)
Android devices also have similar resolutions to WP7 and iOS devices.
(Note that consoles require you to render important elements inside a "title-safe" area or "action-safe" area. Typically 80% and 90% of the full resolution.)
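A quick sketch of computing such a centered safe region (Swift here, though the math is engine-agnostic; pass 0.8 for the stricter title-safe area, 0.9 for action-safe):

```swift
import CoreGraphics

// Shrink the full-screen rectangle to a centered fraction of its size.
func safeArea(for screen: CGRect, fraction: CGFloat) -> CGRect {
    return screen.insetBy(dx: screen.width * (1 - fraction) / 2,
                          dy: screen.height * (1 - fraction) / 2)
}

// safeArea(for: CGRect(x: 0, y: 0, width: 1280, height: 720), fraction: 0.9)
// -> (64, 36, 1152, 648)
```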
Here is the Valve Hardware Survey, which you can see lists the common PC resolutions (under "Primary Display Resolution").
Targeting 800x480 for a mobile game, or 1280x720 for a desktop/console game, is a good place to start.
If you do want to support multiple resolutions, it is important to think about aspect ratio. Here is an excellent question that lists off some options. Basically your options are letter/pillar-boxing or bleeding (allowing for extra rendering outside "standard" screen bounds - like a title-safe area), or some combination of the two.
If your graphics need to be "pixel perfect" and simply scaling them won't work, then I would recommend targeting a series of base resolutions, and then boxing/bleeding to cover any excess screen on a particular device. When I do this, I usually provide assets for these target screen heights: 320, 480, 640, 720, 1080. Note that providing 5 versions of each asset is a huge amount of work - so try to use scaling wherever possible.
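If you go the boxing route, the core computation is a uniform "fit" scale plus centering. A minimal sketch under those assumptions (Swift for consistency with the other examples; XNA itself would be C#, but the math is identical, and `fitViewport` is just a hypothetical helper name):

```swift
import CoreGraphics

// Fit a fixed virtual resolution onto an arbitrary screen: scale uniformly
// to preserve aspect ratio, then center, leaving letterbox (top/bottom) or
// pillarbox (left/right) bars.
func fitViewport(virtual: CGSize, screen: CGSize) -> CGRect {
    let scale = min(screen.width / virtual.width,
                    screen.height / virtual.height)
    let fitted = CGSize(width: virtual.width * scale,
                        height: virtual.height * scale)
    return CGRect(x: (screen.width - fitted.width) / 2,
                  y: (screen.height - fitted.height) / 2,
                  width: fitted.width, height: fitted.height)
}

// fitViewport(virtual: CGSize(width: 800, height: 480),
//             screen: CGSize(width: 1024, height: 768))
// -> (0.0, 76.8, 1024.0, 614.4): letterboxed, 76.8 px bars top and bottom
```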
Many choices about resolution handling will depend on what style of game you are making. For example: whether you try to match a horizontal or vertical screen size will depend largely on what direction your game will mostly scroll in.
When I first started with C++ graphics I used 320*240, or 800*600 when I had to use larger images. But it's really up to you, whatever you prefer, as long as you don't use odd values like 123*549 or something.
'normal' resolutions include but are not limited to:
160*120
320*240
640*480 (probably the most common)
800*600
1024*768

Sub-pixel rendering of fonts on the iPad

Sub-pixel font rendering like ClearType dramatically improves font display resolution and improves screen readability. How would I program sub-pixel rendering of a font (in general), and how can this be achieved on the iPad (C, C++, or Objective-C on an iOS device)? Fonts are quite blurry at certain sizes on the iPad, and I know that the iPad's display would work well with this technique...
So, how would I develop a font rendering engine for the iPad (e.g. How do I even access sub-pixels? Do I use OpenGL? Is there an existing open-source font rendering engine written in C, C++, or Objective-C for Mac OS X?)?
Each pixel on the iPad is a rectangle of red, green, and blue components, so one might think that sub-pixel font rendering would be a good choice for the device.
But consider that this device can be easily changed from portrait to landscape modes, and applications are expected to respond to that change. This would imply that your sub-pixel font mechanism would also have to respond to that change, and you would need two separate sub-pixel descriptions for each font.
Now throw in the fact that developers expect to be able to write universal applications that run on the pad and the phones in a single purchase/download. But the various generations of the phones use different physical pixel configurations, and each of those, recall, would need fonts described differently in portrait mode and in landscape mode. Now you have an explosion of font descriptions.
Now recall that we're speaking of portable devices where the most precious resource is the battery, and sub-pixel font rendering is more computationally intensive.
I'm guessing that this is not too different from the thought process that led Apple to eschew sub-pixel font rendering in favor of hoping that display technology increases pixel density to the point where it is no longer necessary (the retina display on the iPhone 4 being the first step in that direction.)
I would wager that in some future edition of the iPad, we'll have a display with similar density, and it won't matter as much. Any effort you invest in building a sub-pixel font rendering mechanism for your iPad application will become obsolete at that point, so I would recommend not going down that path.