The pixel, abbreviated "px", is also a unit of measurement commonly used in graphic and web design, equivalent to roughly 1⁄96 inch (0.26 mm). This unit is used so that a given element displays at roughly the same size regardless of the screen resolution it is viewed on.
This is from Wikipedia and it is correct, so, like length, a pixel has a unit and a dimension.
"Pixel" both refers to a standardized unit and the individual parts of a screen. A pixel(px) is a unit, which makes sure that objects on a page will be the same size no matter the screen. As for the screen pixel, the size of a pixel on your screen depends on the resolution of your screen and the image.
The smallest QR codes, to my knowledge, are 25x25 blocks, and thus if I want to detect a QR code in an image, the part containing it needs to be at least 25x25 pixels.
Is there a smaller alternative (with a library implementation in Python)?
The code should be able to encode at least 8 bits of information.
No information should be encoded with different colors; the code should use only black and white.
I have a game that is extremely procedural, and because of this one of the textures in the game can end up covering anywhere from (113 x 10) to (682 x 768) on a retina iPad. It isn't always even at the same aspect ratio. Furthermore, the unscaled texture has to be at an aspect ratio of about 2:11, which makes things awkward.
I cannot figure out how Xcode handles PDF vector files.
For example, the first time I turned my SVG into a PDF and followed all the instructions for making a single-vector slot, it ended up pixelated/anti-aliased on some devices. And I know that PDF vectors can be rasterized at any resolution without losing quality! So I am pretty sure Xcode compiles vectors into bitmaps for the SKTexture. And as far as I can tell, the number of pixels it rasterizes from the vector does not even vary by device.
For example, after that I scaled the SVG up to double the points of its dimensions and exported it. Now on some devices the texture looked perfect, yet on others it looked like a rasterized mess! And it didn't seem to depend on one device's resolution being bigger than the other's.
So then I decided to scale the vector up one more time, so at this point it had 4x the points the vector started with. Things worked wonderfully, until I tested on the iPhone 6: instead of being beautiful, the texture got blacked out. I am fairly certain this is because the texture was too big for SpriteKit. There is no happy medium I can find.
My goal is to get this vector file looking great on every device regardless of how it is scaled.
I have looked around to see what SpriteKit uses for its point-to-pixel conversions, and I haven't found any numbers that check out. How can I make my texture work for this project? How does SpriteKit decide how many pixels to rasterize a vector at when I make an SKTexture?
Edit:
Here are some hypothetical questions.
H1. How many points wide and tall would a PDF vector need to be to cover up to these dimensions on any device: width = width of the device, height = height of the device / 2? Basically, if I had a vector I knew was going to occupy that space, what should its dimensions (in points) be? (See the sketch below for the arithmetic I have in mind.)
H2. Is the number of pixels rasterized for every point directly proportional to the number of pixels on the display, or is there more to it than just the pixels on a screen?
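To make H1 concrete, here is the arithmetic I am assuming, as a rough sketch (UIKit; run it inside the app, e.g. in viewDidLoad; the names are just illustrative):

    // Screen size in points, and the scale factor that maps points to pixels on this device.
    CGSize pointSize = [UIScreen mainScreen].bounds.size;
    CGFloat scale = [UIScreen mainScreen].scale;   // pixels per point (1.0, 2.0, ...)
    // H1's region: full screen width by half the screen height.
    CGSize target = CGSizeMake(pointSize.width, pointSize.height / 2.0);
    NSLog(@"region is %.0f x %.0f pt, i.e. %.0f x %.0f px on this device",
          target.width, target.height, target.width * scale, target.height * scale);

On a retina iPad, for example, that works out to 1024 x 384 points, i.e. 2048 x 768 pixels.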
I'm drawing planets in OpenGL ES, and running into some interesting performance issues. The general question is: how best to render "hugely detailed" textures on a sphere?
(the sphere is guaranteed; I'm interested in sphere-specific optimizations)
Base case:
Window is approx. 2048 x 1536 (e.g. iPad3)
Texture map for globe is 24,000 x 12,000 pixels (an area half the size of USA fits the full width of screen)
Globe is displayed at everything from zoomed in (USA fills screen) to zoomed out (whole globe visible)
I need a MINIMUM of 3 texture layers (1 for the planet surface, 1 for day/night differences, 1 for the user interface (highlighting different regions))
Some of the layers are animated (i.e. they have to load and drop their texture at runtime, rapidly)
Limitations:
top-end tablets are limited to 4096x4096 textures
top-end tablets are limited to 8 simultaneous texture units
Problems:
In total, it's naively 500 million pixels of texture data
Splitting into smaller textures doesn't work well because devices only have 8 units; with a single texture layer I could split across the 8 texture units and keep every texture under 4096x4096 - but that only allows one layer
Rendering the layers as separate geometry works poorly because they need to be blended using fragment shaders
...at the moment, the only idea I have that sounds viable is:
split the sphere into NxM "pieces of sphere" and render each one as separate geometry
use mipmaps to render low-res textures when zoomed out
...rely on simple culling to cut out most of them when zoomed in, and mipmapping to use small(er) textures when they can't be culled
...but it seems there ought to be an easier way / better options?
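For scale, just tiling one layer at the 4096 limit is already a lot of pieces; a quick sketch of that arithmetic (assuming square 4096-pixel tiles):

    // How many 4096-wide tiles a single 24,000 x 12,000 layer would need (rough estimate).
    int tilesX = (24000 + 4095) / 4096;   // 6
    int tilesY = (12000 + 4095) / 4096;   // 3
    NSLog(@"%d x %d = %d tiles per layer", tilesX, tilesY, tilesX * tilesY);   // 18

That is 18 pieces for a single layer, before blending the other two layers on top.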
It seems there is no way to fit such huge textures into the memory of a mobile GPU, not even the iPad 3's.
So you have to stream texture data. The thing you need is called a clipmap (popularized by id Software with its extended MegaTexture technology).
Please read about it here; there are links to docs describing the technique: http://en.wikipedia.org/wiki/Clipmap
This is not easily done in ES, as there is no virtual texture extension (yet). You basically need to implement virtual texturing (some ES devices implement ARB_texture_array) and stream in the lowest resolution possible (view-dependent) for your sphere. That way, it is possible to do it all in a fragment shader; no geometry subdivision is required. See this presentation (and the paper) for details on how this can be implemented.
If you do the math, it is simply impossible to stream 1 GB (24,000 x 12,000 pixels x 4 B) in real time. And it would be wasteful, too, as the user will never get to see it all at the same time.
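To make the numbers concrete, here is the back-of-the-envelope version, assuming uncompressed RGBA8 (4 bytes per pixel); the exact figure depends on your texture format and on whether you keep a full mip chain:

    // Back-of-the-envelope memory math for one 24,000 x 12,000 RGBA8 layer.
    unsigned long long base = 24000ULL * 12000ULL * 4ULL;   // 1,152,000,000 bytes, ~1.07 GiB
    unsigned long long withMips = base * 4ULL / 3ULL;       // a full mip chain adds roughly one third
    NSLog(@"one layer: %.2f GiB, with mips: %.2f GiB, three layers: %.2f GiB",
          base / (double)(1ULL << 30), withMips / (double)(1ULL << 30),
          3 * withMips / (double)(1ULL << 30));

That is several times more than a tablet of that era can hold, which is why streaming only the detail that is actually visible is the only realistic option.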
I am trying to build my application at retina resolution (2048x1536), but using:
NSLog(#"resolution from xcode %f %f", [UIScreen mainScreen].bounds.size.width, [UIScreen mainScreen].bounds.size.height);
I always get 1024x768. Any ideas how to set up retina resolution?
You're getting the value in points, not pixels.
From Apple's docs:
Points Versus Pixels
In iOS there is a distinction between the coordinates you specify in your drawing code and the pixels of the underlying device. When using native drawing technologies such as Quartz, UIKit, and Core Animation, you specify coordinate values using a logical coordinate space, which measures distances in points. This logical coordinate system is decoupled from the device coordinate space used by the system frameworks to manage the pixels on the screen. The system automatically maps points in the logical coordinate space to pixels in the device coordinate space, but this mapping is not always one-to-one. This behavior leads to an important fact that you should always remember:
One point does not necessarily correspond to one pixel on the screen.
The purpose of using points (and the logical coordinate system) is to provide a consistent size of output that is device independent. The actual size of a point is irrelevant. The goal of points is to provide a relatively consistent scale that you can use in your code to specify the size and position of views and rendered content. How points are actually mapped to pixels is a detail that is handled by the system frameworks. For example, on a device with a high-resolution screen, a line that is one point wide may actually result in a line that is two pixels wide on the screen. The result is that if you draw the same content on two similar devices, with only one of them having a high-resolution screen, the content appears to be about the same size on both devices.
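If what you want is the pixel count, multiply the point-based bounds by the screen's scale factor. A minimal sketch (run inside the app, e.g. in viewDidLoad; on iOS 8 and later you can also read nativeBounds, which reports pixels directly):

    // bounds is in points; scale is the point-to-pixel factor for this screen.
    CGRect pointBounds = [UIScreen mainScreen].bounds;   // e.g. 1024 x 768 points on a retina iPad
    CGFloat scale = [UIScreen mainScreen].scale;         // 2.0 on retina devices
    NSLog(@"points: %.0f x %.0f", pointBounds.size.width, pointBounds.size.height);
    NSLog(@"pixels: %.0f x %.0f",
          pointBounds.size.width * scale, pointBounds.size.height * scale);

On a retina iPad that prints 1024 x 768 points and 2048 x 1536 pixels, which is the resolution you were expecting.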
Android and iOS have a concept of a "density independent pixel" so your layouts look the same on devices with different densities and screen sizes.
Up until now I've written code to manually space elements using pixels (e.g. I want this button to be 10 pixels from the left side of the screen). This is great on a Curve, but when I load it up on a Bold the resolution is much higher, so 10 pixels covers a much smaller physical space.
What are the best practices for multiple screen sizes on BlackBerry? Is there any easy way to define a density-independent pixel? RIM doesn't seem to offer much in the way of documentation or APIs to make this easy.
Points are density independent pixels (to a good degree of accuracy).
For BlackBerry, the most relevant class is net.rim.device.api.ui.Ui, which defines a UNITS_pt constant (and a UNITS_px constant) and a convertSize method to convert between points and pixels (since operations on Graphics take pixels rather than points).
A useful methodology for BlackBerry apps is to take everything in relation to your font sizes, which you define in points - there's a version of net.rim.device.api.ui.Font.derive that takes a units parameter and makes it easy to get fonts with a particular point size.
Of course, you can't take anything for granted - defining things in points will make things easier, but with BlackBerry you deal with lots of different pixel densities and aspect ratios so test thoroughly, at least on the simulators.