I am animating something's position on the screen in Xcode.
Currently it moves at "1" pixel every .1 seconds.
This means that on a retina display it's not moving 1 pixel every .1 seconds but 2 pixels every .1 seconds.
I want it to move at true 1 pixel every .1 seconds on a retina display. Is there any way to do this?
Any way to set an object's location to be a retina location or something?
You want to move in pixels, but all coordinates in iOS are given in points, so you need to convert your points to pixels. This can be done like so:
CGFloat screenScale = [UIScreen mainScreen].scale; // 1.0 on non-Retina, 2.0 on 2x Retina
CGFloat ratio = 1.0 / screenScale;                 // size of one pixel, in points
Use ratio to increment your animation.
On a non-retina device, ratio will be 1 point. On current retina devices, ratio will be 0.5 point.
As you animate, move your x and y coordinates by ratio points and you will get one pixel of movement each time.
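The same arithmetic can be sketched in a few lines of Python (the screen scale is just a plain parameter here, standing in for the value `[UIScreen mainScreen].scale` returns):

```python
def point_step_for_one_pixel(screen_scale):
    """Distance in points that corresponds to exactly one physical pixel."""
    return 1.0 / screen_scale

# Non-Retina (scale 1.0): step 1.0 point per tick.
# 2x Retina (scale 2.0): step 0.5 point per tick.
x = 0.0
for _ in range(10):                      # ten animation ticks on a 2x Retina screen
    x += point_step_for_one_pixel(2.0)
print(x)                                 # 5.0 points travelled = 10 physical pixels
```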
Starting in iOS 4, dimensions are measured in “points” instead of pixels. On non-Retina screens a point is one pixel; on Retina screens a point is two pixels: draw a one-point line and it shows up two pixels wide.
Therefore, on Retina screens you can move 0.5 points (which equals 1 pixel).
Have a look at Apple's drawing concepts.
I am tracking the color of a pixel at X & Y at a resolution of 1920 by 1080, and I am simply wondering if there is any mathematical way to keep tracking the same pixel across various resolutions.
The pixel itself is static and not moving; however, I am aware that changing resolutions affects scaling and the monitor's X & Y coordinate system.
So any suggestions would be great!
Since the whole screen area is always filled, the same location on the physical screen (determined as the ratio of xLocation divided by xWidth and of yLocation divided by yHeight, measured in centimeters or inches) will also always be at the same ratio of xPixelIndex divided by xTotalPixels and yPixelIndex divided by yTotalPixels.
Let's assume you have xReference and yReference of the target pixel, in a reference resolution WidthReference by HeightReference in which these coordinates mark the desired pixel.
Let's assume you have WidthCurrent and HeightCurrent as your screen size in pixels, for the resolution in which you want to target a pixel at the same physical location.
You then need to determine xCurrent and yCurrent as the coordinates of that pixel in the current resolution.
Then calculate the current coordinates as:
xCurrent = (1.0 * WidthCurrent) / WidthReference * xReference;
yCurrent = (1.0 * HeightCurrent)/ HeightReference * yReference;
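As a runnable sketch of that formula (Python, with the same variable names spelled out):

```python
def rescale(x_reference, y_reference,
            width_reference, height_reference,
            width_current, height_current):
    """Map a pixel coordinate from a reference resolution to the pixel
    at the same physical screen location in the current resolution."""
    x_current = (1.0 * width_current) / width_reference * x_reference
    y_current = (1.0 * height_current) / height_reference * y_reference
    return x_current, y_current

# The screen-center pixel (960, 540) at 1920x1080 lands on the
# screen-center pixel (1280, 720) at 2560x1440.
print(rescale(960, 540, 1920, 1080, 2560, 1440))
```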
I have an image of a human body. I have two reference points which are the left and right waist locations. Let's say for example: (100,100) and (200,100) are the respective left and right waist locations.
In addition to those two points, I also know the "real life" inches value of the waist.
I am trying to take those three data points and extrapolate how many pixels = one inch in "real life". This shouldn't be that hard, but I'm having some type of brain block on this.
Looking for the simple formula. The one I started with is:
(RightPoint.X - LeftPoint.X) / 34"
This does not work. The smaller the waist gets, the larger the pixels per inch value. In the above, it would be 2.9 pixels == 1".
If I change the 34" to 10", it shoots up to 10 pixels == 1". Or maybe that's correct? Ugh...brain where are you tonight!?!?
I'm looking for the correct formula that based on those three referential data points will allow me to determine how many pixels in the image == 1". So if I know in real life that the person's waist is 34 inches, I want to determine that in the image...let's say 2.5 pixels == 1 inch relative to the picture.
Unfortunately you don't have enough information to work it out. Firstly, the waist measurement in real life is 3-D and goes all the way around the body, so for a start you would have to divide it by 2 to allow for the tape-measure going both across the front and the back of the body - so your 34" waist would mean your 100 pixels correspond to 17" - if the body was flat - which it isn't! And that is the problem.
Imagine the person had two thick pillows down the front of their trousers... that would affect their waist measurement (make it miles bigger) but, as they are down the FRONT of their trousers, it wouldn't affect the pixel width.
Sorry, you can't do it accurately. You could assume their waist is perfectly circular; then the 100 pixels would correspond to the projection (the diameter) of their waist, and 34" would be the circumference, which is pi x d. So you would say that 100 pixels correspond to 34/pi, or around 11".
So, in concrete terms:
34/pi inches ≈ 100 pixels
10.82 inches ≈ 100 pixels
1 inch ≈ 100/10.82 pixels
1 inch ≈ 9.24 pixels
But remember this is an approximation based on the waist being circular.
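That arithmetic as a short sketch (Python; the circular-waist assumption is the answer's approximation, not a fact about the image):

```python
import math

waist_pixels = 100    # left-to-right waist width measured in the image
waist_inches = 34     # real-world waist circumference

# Assuming a circular waist, the width visible in a front-on photo
# is the diameter: circumference / pi.
visible_inches = waist_inches / math.pi          # ~10.82 in
pixels_per_inch = waist_pixels / visible_inches
print(round(pixels_per_inch, 2))                 # ~9.24
```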
I'd like to display a 2D grid of 100 x 100 squares. The size of each square is 10 pixels wide and filled with color. The color of any square may be updated at any time.
I'm new to OpenGL and wondered if I need to define the vertices for every square in the grid or is there another way? I want to use OpenGL directly rather than a framework like Cocos2D for this simple task.
You can probably get away with just rendering the positions of your squares as points with a size of 10. GL_POINTS are always a set number of pixels wide and high, so that will keep your squares at 10 pixels. If you render the squares as quads, you will have to make sure they are at the right distance from the camera to be 10 pixels wide and high (the aspect ratio may also affect it).
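If you go the GL_POINTS route, you only need one vertex per square: its center. A small sketch of that vertex layout in plain Python (the function name and grid origin are illustrative; the actual buffer upload and draw calls depend on your GL setup):

```python
def grid_point_centers(n=100, cell=10):
    """Center coordinates for an n x n grid of cell-by-cell pixel squares,
    suitable for drawing as GL_POINTS with a point size of `cell`."""
    return [(col * cell + cell / 2.0, row * cell + cell / 2.0)
            for row in range(n) for col in range(n)]

centers = grid_point_centers()
print(len(centers))     # 10,000 points instead of 40,000 quad vertices
```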
I have a stupid question:
I have a black circle on white background, something like:
I have a code in Matlab that gets an image with a black circle and returns the number of pixels in the circle.
Will I get the same number of pixels in the circle with a 5-megapixel camera and with an 8-megapixel camera?
The short answer is: under most circumstances, no. An 8 MP image has more pixels than a 5 MP one. However...
That depends on many factors related to the camera and the images that you take:
Focal length of the cameras, and other optics parameters. Consider a fish-eye lens to understand my point.
Distance of the circle from the camera. Obviously, closer objects appear larger.
What the camera does with the pixels from the sensor. For example, some 5 MP cameras work in a down-scaled mode, outputting 3 MP images instead.
It also depends on the resolution: how many pixels are counted horizontally and vertically when describing a stored image.
Higher mega pixel cameras offer the ability to print larger images.
For example, a 6 MP camera offers a resolution of 3000 x 2000 pixels. If you allow 300 dpi (dots per inch) for print quality, this would give you a print of approx 10 in x 7 in: 3000 divided by 300 = 10, and 2000 divided by 300 = approx 7.
A 3.1 MP camera offers a resolution of 2048 x 1536 pixels, which gives a print size of approx 7 in x 5 in.
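The print-size arithmetic above is easy to sketch:

```python
def print_size_inches(width_px, height_px, dpi=300):
    """Approximate print dimensions for an image at a given print quality."""
    return width_px / dpi, height_px / dpi

print(print_size_inches(3000, 2000))   # 6 MP  -> about 10 x 6.7 in
print(print_size_inches(2048, 1536))   # 3.1 MP -> about 6.8 x 5.1 in
```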
I'm having some strange experience with Cocos2D.
I can't seem to draw/plot a point at x = 0 or y = 0.
I have to move it inside the screen by one coordinate to be visible.
It's like it's cut off or something, I don't really understand.
I want to do some pixel plotting, so it's rather important. I'm thinking I might need to use Core Graphics instead...
Cocos2D renders 1 pixel at point 0,0 just fine. There is probably one of two things causing this issue for you:
You might be looking at a Retina display. Cocos2D scales content by 2x by default for Retina displays, so "one pixel" is actually 4 tiny retina pixels. Telling cocos2d to draw a single pixel at 0,0 on a Retina actually draws pixels at 0,0, -1,0, -1,-1 and 0,-1. (The last three are going to be offscreen.)
You may have shifted or scaled your parent CCNode(s) in such a way that 0,0 is actually considered offscreen.
I'm guessing it's #1. A single pixel on Retina is difficult to see, so you probably want to stick with the 2x scaling. Just offset your parent CCNode by one point for Retina displays, which would allow you to start plotting at 0,0 in that local coordinate system without having to worry about any offsets while you plot.
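To see why three of those four sub-pixels land offscreen, here is a tiny sketch of the geometry the answer describes (Python; the dot-centered-on-the-point behavior is an assumption taken from the answer, not from the cocos2d source):

```python
def retina_pixels_for_dot(x_pt, y_pt, scale=2):
    """Physical pixels covered by a 'one pixel' dot drawn at a point
    coordinate when content is scaled by `scale`, assuming the dot is
    centered on the point."""
    cx, cy = x_pt * scale, y_pt * scale   # point coordinate in physical pixels
    half = scale // 2
    return [(cx + dx, cy + dy)
            for dy in range(-half, half)
            for dx in range(-half, half)]

print(retina_pixels_for_dot(0, 0))  # [(-1, -1), (0, -1), (-1, 0), (0, 0)]
```

Only (0, 0) is on screen; the other three pixels have a negative coordinate, matching the answer's point about offsetting the parent node by one point.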