Pixels of an image - image-processing

I have a stupid question:
I have an image of a black circle on a white background.
I have code in MATLAB that takes such an image and returns the number of pixels inside the circle.
Will I get the same pixel count from a 5-megapixel camera as from an 8-megapixel camera?
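The MATLAB code itself isn't shown; just as a reference for what such a pixel count can look like, here is a minimal sketch in Python with OpenCV (assuming the circle is the only dark region and using an arbitrary threshold of 128):

import cv2

# Load the photo as a greyscale image; "circle.png" is a placeholder filename.
img = cv2.imread("circle.png", cv2.IMREAD_GRAYSCALE)

# Threshold at 128 so the dark circle becomes the foreground (value 255).
_, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)

# Count the foreground pixels, i.e. the pixels belonging to the circle.
circle_pixels = cv2.countNonZero(mask)
print(circle_pixels)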

The short answer is: under most circumstances, no. An 8 MP image has more pixels than a 5 MP image, so the circle will generally cover more of them. However...
That depends on many factors related to the camera and the images you take:
Focal length of the camera, and other optics parameters. Consider a fish-eye lens to understand my point.
Distance of the circle from the camera. Obviously, closer objects appear larger.
What the camera does with the pixels from the sensor. For example, some 5 MP cameras work in a down-scaled mode, outputting 3 MP images instead.

It depends on the resolution. Resolution, when used to describe a stored image, is how many pixels you count horizontally and vertically.
Higher-megapixel cameras offer the ability to print larger images.
For example, a 6 MP camera offers a resolution of 3000 x 2000 pixels. If you allow 300 dpi (dots per inch) for print quality, this gives you a print of approximately 10 in x 7 in: 3000 / 300 = 10, and 2000 / 300 ≈ 7.
A 3.1 MP camera offers a resolution of 2048 x 1536 pixels, which gives a print size of about 7 in x 5 in.
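The arithmetic above is easy to reproduce; a small sketch of that print-size calculation (300 dpi is simply the print-quality assumption used above):

def print_size_inches(width_px, height_px, dpi=300):
    """Return the approximate print size in inches for a given pixel resolution."""
    return width_px / dpi, height_px / dpi

# 6 MP camera, 3000 x 2000 pixels: (10.0, 6.67), roughly the 10 in x 7 in print above
print(print_size_inches(3000, 2000))

# 3.1 MP camera, 2048 x 1536 pixels: (6.83, 5.12), roughly 7 in x 5 in
print(print_size_inches(2048, 1536))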

Related

How does image digitalization differ from sound digitalization (PCM)?

I am trying to understand digitalization of sound and images.
As far as I know, they both need to convert analog signal to digital signal. Both should be using sampling and quantization.
Sound: we have amplitude on the y axis and time on the x axis. What is on the x and y axes during image digitalization?
Is there some kind of standard sample rate for image digitalization? 44.1 kHz is used for CDs (sound digitalization). How exactly is a sample rate used for images?
Quantization: for sound we use bit depth, which means the number of amplitude levels. Images also use bit depth, but does it mean how many intensities we are able to distinguish? (Is that true?)
What are other differences between sound and image digitalization?
Acquisition of images can be summarized as a spatial sampling step and a conversion/quantization step. The spatial sampling on (x, y) is due to the pixel size. The data (on the third axis, z) is the number of electrons generated by the photoelectric effect on the chip. These electrons are converted to ADUs (analog-to-digital units) and then to bits. What is quantized is the light intensity in levels of grey; for example, data on 8 bits would give 2^8 = 256 levels of grey.
An image loses information both due to the spatial sampling (resolution) and the intensity quantization (levels of gray).
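As a toy illustration of the intensity quantization (a sketch, assuming a normalized intensity in [0, 1] and an 8-bit target):

import numpy as np

def quantize(intensity, bits=8):
    """Map a normalized intensity in [0, 1] onto 2**bits grey levels."""
    levels = 2 ** bits                  # 8 bits -> 256 levels of grey
    return np.clip(np.round(intensity * (levels - 1)), 0, levels - 1).astype(int)

print(quantize(0.5))                            # mid-grey -> 128
print(quantize(np.array([0.0, 0.25, 1.0])))     # -> [  0  64 255]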
Unless you are talking about videos, images won't have sampling in units of Hz (1/time) but in 1/distance. What is important is to satisfy the Shannon-Nyquist theorem to avoid aliasing. The spatial frequencies you are able to capture depend directly on the optical design. The pixel size must be chosen with respect to this design to avoid aliasing.
EDIT: In the example below I plotted a sine function (white/black stripes). On the left the signal is correctly sampled; on the right it is undersampled by a factor of 4. It is the same signal, but due to the bigger pixels (a lower sampling rate) you get aliasing of your data. Here the stripes are horizontal, but you get the same effect for vertical ones.
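The figure itself is not reproduced here; a sketch that recreates the same kind of demonstration (the exact stripe frequency is my own choice; only the factor-of-4 undersampling comes from the description above) could be:

import numpy as np
import matplotlib.pyplot as plt

# "Continuous" signal: horizontal stripes, i.e. a sine along the vertical axis.
n = 256
y = np.arange(n)
stripes = np.sin(2 * np.pi * 40 * y / n)        # 40 stripe periods over the height

fine = np.tile(stripes[:, None], (1, n))        # adequately sampled (6.4 rows per period)
coarse = fine[::4, :]                           # keep every 4th row: undersampled by 4

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(fine, cmap="gray")
axes[0].set_title("correctly sampled")
axes[1].imshow(coarse, cmap="gray", aspect=4)   # stretch rows back to the same height
axes[1].set_title("undersampled x4 (aliased)")
plt.show()

Because the coarse version keeps fewer than two samples per stripe period, the stripes show up at a wrong, lower spatial frequency, which is exactly the aliasing described above.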
There is no common standard for the spatial axis for image sampling. A 20 megapixel sensor or camera will produce images at a completely different spatial resolution in pixels per mm, or pixels per degree angle of view than a 2 megapixel sensor or camera. These images will typically be rescaled to yet another non-common-standard resolution for viewing (72 ppi, 300 ppi, "Retina", SD/HDTV, CCIR-601, "4k", etc.)
For audio, 48k is starting to become more common than 44.1ksps. (on iPhones, etc.)
("a nice thing about standards is that there are so many of them")
Amplitude scaling in raw format also has no single standard. When converted or requantized to storage format, 8-bit, 10-bit, and 12-bit quantizations are the most common for RGB color separations. (JPEG, PNG, etc. formats)
Channel formats are different between audio and image.
X, Y, where X is time and Y is amplitude, is only good for mono audio. Stereo usually needs T, L, R for time, left, and right channels. Images are often described by X, Y, R, G, B, i.e. five values per sample point, where X, Y are spatial location coordinates and R, G, B are the color intensities at that location. The image intensities can be somewhat related (depending on gamma correction, etc.) to the number of incident photons per shutter duration in certain visible EM frequency ranges per incident solid angle to some lens.
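As a concrete illustration of those channel layouts (a sketch; the sample rate, duration, and image size are arbitrary):

import numpy as np

# Stereo audio: one amplitude per channel per time sample -> shape (samples, 2).
sample_rate = 48_000
stereo = np.zeros((sample_rate * 2, 2), dtype=np.int16)   # 2 seconds of silence, L and R

# RGB image: three intensities per spatial location -> shape (height, width, 3).
image = np.zeros((480, 640, 3), dtype=np.uint8)           # 8 bits per color separation

print(stereo.shape, image.shape)   # (96000, 2) (480, 640, 3)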
A low-pass filter for audio, and an optical low-pass (anti-aliasing) filter in front of a camera's Bayer sensor, are commonly used to make the signal closer to bandlimited so it can be sampled with less aliasing noise/artifacts.

How to set resolution of image

I am using OpenCV to generate images with depth of 1 bit to cut in a laser cutter (Github repo here). I save them with:
cv2.imwrite(filepath, img, [cv2.IMWRITE_PNG_BILEVEL, 1])
Each pixel corresponds to 0.05 mm (called the "scan gap" in the laser cutter). A sample image is 300 x 306 pixels and appears in the laser cutter software (LaserCut Pro 5) with a size of 30 mm x 30 mm. This corresponds to a resolution of 254 pixels per inch; the unusual value presumably comes from the software. I want a size of 15 mm x 15.3 mm and want to set a higher resolution to achieve that. I could resize by hand, but if I make a mistake, the pixels will no longer be exactly aligned with the scan gap of the laser, resulting in inaccuracies in the engraving.
Does OpenCV have a way to set the resolution or final size of the image?
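As far as I know, cv2.imwrite does not expose the PNG's physical-resolution (DPI) metadata, so one workaround is to re-save the file with a library that does. A sketch using Pillow (an assumption, not part of the original setup), with the DPI derived from the 0.05 mm scan gap:

from PIL import Image

SCAN_GAP_MM = 0.05                       # one pixel corresponds to 0.05 mm
dpi = 25.4 / SCAN_GAP_MM                 # = 508 pixels per inch

# Re-save the OpenCV-generated PNG with its physical-size (pHYs) metadata set,
# so that 300 px are read as 300 * 0.05 mm = 15 mm.
img = Image.open("engraving.png")        # placeholder path for the cv2.imwrite output
img.save("engraving_508dpi.png", dpi=(dpi, dpi))

Whether LaserCut Pro 5 actually honours the embedded DPI is worth verifying; if it doesn't, generating the image at exactly (desired size in mm / 0.05 mm) pixels is the safer route.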

Difference of a circle in picture with different resolutions by the same phone camera

I photograph a circle with my phone and obtain the diameter of the circle (as a number of pixels in the picture) by image processing. I found that the diameter in pixels is different at different resolutions. The following table records the diameters at different resolutions from my experiments. I want to know how a phone camera produces photos at different resolution sizes, and what the relation between the different resolutions is. I have been searching the net for a long time, but to no avail.
The sensor in your camera has a certain size - it may be 29mmx19mm, or 24mm by 16mm (APS-C) or Micro Four Thirds (18mm by 13mm) or Full Frame (36mm by 24mm).
When the light goes through the lens it forms an image of your circle on the sensor and the sensor records it. When you change resolution, the camera uses a different number of pixels to record it but the circle still shows up as the same number of millimetres on the sensor because it is the lens's focal length and the distance to the object that determines the size of the image formed on the sensor.
If you divide the resolution by the diameter, you will see that the ratio is roughly constant (about 6.25), which means your circle forms an image of constant physical size on your sensor:
Let's try an example and pretend your camera is full frame. That means that at 640x480 resolution, 640 pixels span 36 mm, so your 104-pixel-wide circle means the image formed on your sensor is
(104 / 640) x 36 mm ≈ 5.85 mm
When you record at 4160 resolution, the same 36 mm is divided into 4160 pixels, so your 664 pixels make
(664 / 4160) x 36 mm ≈ 5.7 mm
So basically, what you are seeing is that the size of the image on your sensor is independent of the resolution you record it at - which is correct since the size of the image on the sensor is determined by the focal length of your lens and the distance to the object.
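That calculation is easy to wrap up in a helper; a small sketch (the 36 mm full-frame sensor width is the same assumption made above):

def size_on_sensor_mm(diameter_px, image_width_px, sensor_width_mm=36.0):
    """Physical size of the circle's image on the sensor, from its size in pixels."""
    return diameter_px / image_width_px * sensor_width_mm

print(size_on_sensor_mm(104, 640))     # ~5.85 mm at 640x480
print(size_on_sensor_mm(664, 4160))    # ~5.7 mm at the 4160-wide resolution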

Determining pixel coordinates across display resolutions

If a program displays a pixel at X,Y on a display with resolution A, can I precisely predict at what coordinates the same pixel will display at resolution B?
MORE INFORMATION
The 2 display resolutions are:
A-->1366 x 768
B-->1600 x 900
Dividing the max resolutions in each direction yields:
X-direction scaling factor = 1600/1366 = 1.171303075
Y-direction scaling factor = 900/768 = 1.171875
Say for example that the only red pixel on display A occurs at pixel (1,1). If I merely scale up using these factors, then on display B, that red pixel will be displayed at pixel (1.171303075, 1.171875). I'm not sure how to interpret that, as I'm used to thinking of pixels as integer values. It might help if I knew the exact geometry of pixel coordinates/placement on a screen. e.g., do pixel coordinates (1,1) mean that the center of the pixel is at (1,1)? Or a particular corner of the pixel is at (1,1)? I'm sure diagrams would assist in visualizing this--if anyone can post a link to helpful resources, I'd appreciate it. And finally, I may be approaching this all wrong.
Thanks in advance.
I think your problem is related to the field of scaling/resampling images. Bitmap, or raster, images are the form digital photographs take, so they are the most common way to represent natural images that are rich in detail. The term bitmap refers to how a given pattern (the bits in a pixel) maps to a specific color. A bitmap image takes the form of an array, where the value of each element, called a pixel (picture element), corresponds to the color of that region of the image.
Sampling
When measuring the value for a pixel, one takes the average color of an area around the location of the pixel. A simplistic model is sampling over a square, and a more accurate measurement is to calculate a weighted Gaussian average. When perceiving a bitmap image, the human eye blends the pixel values together, recreating the illusion of the continuous image it represents.
Raster dimensions
The number of horizontal and vertical samples in the pixel grid is called the raster dimensions; it is specified as width x height.
Resolution
Resolution is a measurement of sampling density; the resolution of a bitmap image gives the relationship between pixel dimensions and physical dimensions. The most commonly used measurement is ppi, pixels per inch.
Scaling / Resampling
Image scaling is the process of creating an image with dimensions different from those we have. Another name for scaling is resampling. When resampling, algorithms try to reconstruct the original continuous image and create a new sample grid from it. There are two kinds of scaling: up and down.
Scaling image down
The process of reducing the raster dimensions is called decimation; this can be done by averaging the values of the source pixels contributing to each output pixel.
Scaling image up
When we increase the image size, we actually want to create sample points between the original sample points of the original raster. This is done by interpolating the values in the sample grid, effectively guessing the values of the unknown pixels. The interpolation can be nearest-neighbor, bilinear, bicubic, etc. But the scaled-up/down image must still be represented over a discrete grid.
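Applied to the original question, a sketch of the coordinate mapping (assuming pixel i covers the interval [i, i+1) with its center at i + 0.5, and returning the destination pixel that contains the scaled center, which is effectively nearest-neighbor resampling):

def map_pixel(x, y, src=(1366, 768), dst=(1600, 900)):
    """Map an integer pixel coordinate from a src resolution to a dst resolution."""
    sx = dst[0] / src[0]                 # ~1.1713 for 1600/1366
    sy = dst[1] / src[1]                 # ~1.1719 for 900/768
    # Scale the pixel's center and return the destination pixel containing it.
    nx = int((x + 0.5) * sx)
    ny = int((y + 0.5) * sy)
    return nx, ny

print(map_pixel(1, 1))                   # the red pixel from the example -> (1, 1)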

A 2D grid of 100 x 100 squares in OpenGL ES 2.0

I'd like to display a 2D grid of 100 x 100 squares. Each square is 10 pixels wide and filled with a color. The color of any square may be updated at any time.
I'm new to OpenGL and wondered whether I need to define the vertices for every square in the grid or whether there is another way. I want to use OpenGL directly rather than a framework like Cocos2D for this simple task.
You can probably get away with just rendering the positions of your squares as points with a size of 10. GL_POINTS are always a set number of pixels wide and high, so that will keep your squares at 10 pixels. If you render the squares as quads, you will have to make sure they are the right distance from the camera to be 10 pixels wide and high (and the aspect ratio may also affect it).
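A sketch of what the data and shaders for that approach might look like (the GLSL follows OpenGL ES 2.0 conventions with gl_PointSize; the grid layout and the use of NumPy here are my own assumptions, and the arrays would still need to be uploaded as buffers and drawn with glDrawArrays(GL_POINTS, ...)):

import numpy as np

GRID = 100            # 100 x 100 squares
SIZE = 10.0           # each square is 10 pixels wide

# One point per square, positioned at the square's centre in pixel coordinates.
xs, ys = np.meshgrid(np.arange(GRID), np.arange(GRID))
positions = np.column_stack([(xs.ravel() + 0.5) * SIZE,
                             (ys.ravel() + 0.5) * SIZE]).astype(np.float32)

# One RGB colour per square; update this array (and re-upload it) to recolour a square.
colors = np.ones((GRID * GRID, 3), dtype=np.float32)

VERTEX_SHADER = """
attribute vec2 a_position;   // position in pixel coordinates
attribute vec3 a_color;
uniform vec2 u_viewport;     // viewport size in pixels
varying vec3 v_color;
void main() {
    // Convert pixel coordinates to clip space and draw each point 10 px wide.
    gl_Position = vec4(a_position / u_viewport * 2.0 - 1.0, 0.0, 1.0);
    gl_PointSize = 10.0;
    v_color = a_color;
}
"""

FRAGMENT_SHADER = """
precision mediump float;
varying vec3 v_color;
void main() {
    gl_FragColor = vec4(v_color, 1.0);
}
"""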
