Calculating Difference In Rotation Between Two Images of Different Quality - opencv

The goal is to replace all low resolution images by referring to a repository of high resolution images.
I was able to replace the images, but I noticed that some of them were rotated, and I also need to reflect that rotation in the images I am adding. There is no pattern to how the rotation was changed: the rotation was corrected manually, and no records were kept for almost 50% of the images.
I was unable to find a way to calculate the rotation, since the images are of different quality (same WIDTHxHEIGHT, but different file size).
The following is one of the cases that need to be resolved:
Original Low Quality Image
Added High Quality Image

Like phoenixstudio said, first downsample the images to the same size. With OpenCV, you can do this with the resize function.
Then compare rotations of the images. Beware that even if the images came from the same high resolution source, it is unlikely that they will be bit-identical for the correct rotation. A different downsampling method or distortion from lossy compression can create minor differences in pixel values. So compare with a tolerance, e.g. mean((A - B)^2) < tol.
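A minimal sketch of that comparison in Python with OpenCV, assuming the rotation is a multiple of 90 degrees and using an arbitrary tolerance value (tune tol for your data):

import cv2
import numpy as np

def find_rotation(low_res_path, high_res_path, tol=100.0):
    # Returns 0, 90, 180 or 270 (degrees clockwise) for the best match,
    # or None if no candidate rotation falls within the tolerance.
    low = cv2.imread(low_res_path)
    high = cv2.imread(high_res_path)

    # Downsample the high resolution image to the low resolution size
    high_small = cv2.resize(high, (low.shape[1], low.shape[0]),
                            interpolation=cv2.INTER_AREA)

    candidates = {
        0: high_small,
        90: cv2.rotate(high_small, cv2.ROTATE_90_CLOCKWISE),
        180: cv2.rotate(high_small, cv2.ROTATE_180),
        270: cv2.rotate(high_small, cv2.ROTATE_90_COUNTERCLOCKWISE),
    }

    best_angle, best_mse = None, float("inf")
    for angle, candidate in candidates.items():
        if candidate.shape != low.shape:
            continue  # 90/270 candidates only match if the image is square
        mse = np.mean((low.astype(np.float32) - candidate.astype(np.float32)) ** 2)
        if mse < best_mse:
            best_angle, best_mse = angle, mse

    return best_angle if best_mse < tol else None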
Another thought: If these are JPEG images, there might be a rotation field in the EXIF metadata, which might help: see https://jdhao.github.io/2019/07/31/image_rotation_exif_info/
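Reading that tag with Pillow takes only a few lines; a rough sketch (many files simply won't carry the tag, in which case this returns None):

from PIL import Image, ExifTags

def exif_orientation(path):
    # Returns the EXIF Orientation value (1-8) if present, else None
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if ExifTags.TAGS.get(tag_id) == "Orientation":
            return value
    return None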

Related

Does scaling images up or down affect image information?

I'm working on a graduation project for image forgery detection using a CNN. In most of the papers I have read, the dataset is downscaled before being fed to the network. I want to know how this process affects the image information.
Images are resized/rescaled to a specific size for a few reasons:
(1) It allows the user to set the input size to their network. When designing a CNN you need to know the shape (dimensions) of your data at each step; so, having a static input size is an easy way to make sure your network gets data of the shape it was designed to take.
(2) Using a full resolution image as the input to the network is very inefficient (super slow to compute).
(3) In most cases the features you want to extract or learn from an image are still present after downsampling. So, in a way, resizing an image to a smaller size denoises it, filtering out many of the unimportant details for you.
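To illustrate point (1), a minimal preprocessing sketch with OpenCV; the 224x224 input size is just a placeholder for whatever your network expects:

import cv2

INPUT_SIZE = (224, 224)  # hypothetical fixed input size of the network

def preprocess(path):
    img = cv2.imread(path)
    # INTER_AREA is usually preferred when shrinking an image
    return cv2.resize(img, INPUT_SIZE, interpolation=cv2.INTER_AREA)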
Well, you change the image's size. Of course that changes its information.
You cannot reduce image size without omitting information. Simple case: throw away every second pixel to scale the image down to 50%.
Scaling up adds new pixels. In its simplest form you duplicate pixels, creating redundant information.
More complex solutions create new pixels (fewer or more) by averaging neighbouring pixels or interpolating between them.
Scaling up is reversible. It neither creates nor destroys information.
Scaling down divides the amount of information by the square of the downscaling factor*. Upscaling after downscaling results in a blurred image.
(*This is true to a first approximation. If the image doesn't have high frequencies, they are not lost, hence no loss of information.)
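A quick way to see this loss for yourself is a downscale/upscale round trip; a small OpenCV sketch (the file names are placeholders):

import cv2

img = cv2.imread("input.png")
h, w = img.shape[:2]

# Scale down to 50%, then back up to the original size
small = cv2.resize(img, (w // 2, h // 2), interpolation=cv2.INTER_AREA)
restored = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)

# The round trip loses high-frequency detail: 'restored' looks blurred
cv2.imwrite("roundtrip.png", restored)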

Is it possible to take a low resolution image from a street camera, enlarge it and see image details?

I would like to know if it is possible to take a low resolution image from a street camera, enlarge it,
and see image details (for example a face, or a car plate number). Is there any software that is able to do this?
Thank you.
example of image: http://imgur.com/9Jv7Wid
Possible? Yes. In existence? Not to my knowledge.
What you are referring to is called super-resolution. The way it works, in theory, is that you take multiple low resolution images and combine them to create a high-resolution image.
The way this works is that you essentially map each image onto all the others to form a stack, where the target portion of the image is all the same. This gets extremely complicated extremely fast as any distortion (e.g. movement of the target) will cause the images to differ dramatically, on the pixel level.
But let's say you have the images stacked and have removed the non-relevant pixels from the stack of images. You are left, hopefully, with a movie/stack of images that all show the exact same scene, but with sub-pixel distortions. A sub-pixel distortion simply means that the target has moved somewhere inside the pixel, or has moved partially into the neighboring pixel.
You can't measure whether the target has moved within the pixel, but you can detect whether it has moved partially into a neighboring pixel. You can do this by knowing that the target is going to give off X amount of photons, so if you see 1/4 of the photons in one pixel and 3/4 of the photons in the neighboring pixel you know its approximate location, which is 3/4 in one pixel and 1/4 in the other. You then construct an image that has a resolution of these sub-pixels and place these sub-pixels in their proper place.
All of this gets very computationally intensive, and sometimes the images are just too low-resolution and have too much distortion from image to image to even create a meaningful stack. I did read a paper about a university lab being able to create high-resolution images from low-resolution images, but it was a very tightly controlled experiment, where they moved the target precisely X amount from image to image and had a very precise camera (probably scientific grade, which is far more sensitive than any commercial grade security camera).
In essence, to do this reliably in the real world you need to set up cameras in a very precise way, and they need to be very accurate in a particular way, which is going to be expensive, so you are better off just putting in a better camera than relying on this very imprecise technique.
Actually it is possible to do super-resolution (SR) from even a single low-resolution (LR) image! So you don't have to go to the trouble of taking many LR images with sub-pixel shifts to achieve it. The intuition behind such techniques is that natural scenes are full of repetitive patterns that can be used to enhance the frequency content of similar patches (e.g. you can implement dictionary learning in your SR reconstruction technique to generate the high-resolution version). Sure, the enhancement may not be as good as using many LR images, but such a technique is simpler and more practical.
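As one concrete single-image option (a different technique than the dictionary-learning idea above), recent OpenCV builds ship a dnn_superres contrib module that runs pre-trained deep SR models; a rough sketch, assuming you have separately downloaded an EDSR model file. Note that it still cannot recover detail the camera never captured, so an unreadable plate will stay unreadable:

import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")   # pre-trained model, downloaded separately
sr.setModel("edsr", 4)       # algorithm name and upscaling factor

low_res = cv2.imread("street_camera_frame.png")  # placeholder file name
high_res = sr.upsample(low_res)
cv2.imwrite("upscaled.png", high_res)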
Photoshop would be your best bet. But know that you cannot reliably increase the size of an image without making the quality even worse.

Best way to store motion changes to reduce memory

I am comparing JPEG to JPEG in a constant 'video stream'. I am using EMGU/OpenCV to compare each pixel at the byte level. There are 3 channels to each image (RGB). I had heard that it is common practice to store only the pixels that have changed between frames as a way of conserving memory. But if, for example, I say EVERY pixel has changed (note that I am using an exaggerated example to make my point, and I would normally discard such large changes), then the resulting bytes saved are 3 times larger than the original JPEG.
How can I store such motion changes efficiently?
thanks
While taking consecutive images, the camera may or may not move. If the camera is fixed, only the objects in the view move and some portion of the image changes each time. If the camera also moves, the image changes significantly even if the objects stand still; there are algorithms to discard the effect of the camera's motion. The main idea is that, compared with the sampling frequency of the camera (e.g. 25 frames per second), most objects are nearly standing still.
Because most of the image is unchanged between frames, it becomes feasible to store the difference between images, which gives some compression. However, after some amount of time the newly received image differs too much from the reference image, so it becomes better to take a new reference, which is called a "reference frame".
In fact, modern video compression algorithms use advanced techniques to detect objects and track them, which results in better compression ratios.
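A very rough sketch of the reference-frame idea in Python with OpenCV (the question uses EMGU/C#, and both thresholds below are arbitrary placeholders to tune):

import cv2
import numpy as np

DIFF_THRESHOLD = 30    # per-pixel change treated as "motion"
REFRESH_RATIO = 0.5    # take a new reference frame when >50% of pixels changed

def encode_frames(frames):
    reference = None
    encoded = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff_too_big = (
            reference is None
            or np.count_nonzero(cv2.absdiff(gray, reference) > DIFF_THRESHOLD)
               > REFRESH_RATIO * gray.size
        )
        if diff_too_big:
            # Store a full keyframe and make it the new reference
            reference = gray
            encoded.append(("keyframe", gray.copy()))
        else:
            # Store only the coordinates and values of the changed pixels
            changed = cv2.absdiff(gray, reference) > DIFF_THRESHOLD
            ys, xs = np.nonzero(changed)
            encoded.append(("delta", ys, xs, gray[ys, xs]))
    return encoded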
Wikipedia - Different compression techniques
Check This - OpenCV should handle the storing of consecutive images in different video formats.

resize image without losing quality with Gimp

I have a bunch of images which are way too big; I need to decrease their size from 30 kB to 10 or 5 kB without losing quality. I tried to change the DPI and pixel dimensions with no success. The images got blurred, and since they contain text I can't read anything after the changes. Is there any way I can accomplish this without losing quality? I have almost a dozen images in my application.
Thanks in advance and have a nice day.
For batch resizing I use IrfanView (despite its "lite-ness" it's very powerful).
It has a nice batch dialog, with a lot of options.
If you're working with PNG files, try using better compression and/or different color depth settings (if you're not using transparency you could try converting them to JPEG, although you might lose some quality).
Changing color depth/range/compression might not affect image quality (not visibly anyway, if used with moderation), and it will decrease the size of the picture, in most cases anyway.
If you want to stick with GIMP (I've never personally used it), it should have some export features where you can select settings for the image, like format and options.
You cannot leave out data without reducing quality. Data has meaning.
You may try improved compression; pngcrush is a tool that automatically tries several approaches for you and picks the best.
Reducing colour depth will reduce the file size (while reducing colour quality). You can also turn on dithering in some image editors, but that's another loss in quality.
If your image has photographic content rather than graphics, convert to JPEG and use the JPEG quality settings; experiment with them a bit.
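If you do go the JPEG route, experimenting with the quality setting is a short loop with Pillow; a small sketch (file names are placeholders):

from PIL import Image

img = Image.open("screenshot.png").convert("RGB")
# Try a few quality settings and keep the smallest file that still looks acceptable
for quality in (85, 70, 50, 30):
    img.save(f"out_q{quality}.jpg", quality=quality, optimize=True)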
It seems that if I have a large PNG 2500px wide and I want to resize it down to 100px wide, scaling the image all at once to the desired size makes it way too distorted to use.
However, if I scale the image in small increments of 200 pixels and repeat until I reach the desired width, the image does not get as distorted. So if I'm at 2500px, I would scale the image to 2300px, then to 2100px, and so on. The smaller the step, the less distortion.
Any resize method will have some loss, no matter how small. The following steps should keep the quality loss to a minimum.
steps for a single layer
layer->scale layer
image->scale image
image->fit canvas to layer
file->export as
steps for multiple layers
layer->new layer group
move all layers to layer group
select layer group
layer->scale layer
image->scale image
image->fit canvas to layer
file->export as

Image Comparison

What is an efficient way to compare two images in Visual C?
Also, in which format do the images have to be stored (BMP, GIF, JPEG, ...)?
Please provide some suggestions.
If the images you are trying to compare have distinctive characteristics that you are trying to differentiate, then PCA is an excellent way to go. The question of what file format you need is really irrelevant; you need to load the image into the program as an array of numbers and do the analysis.
Your question opens a can of worms in terms of complexity.
If you want to compare two images to check whether they are the same, then you need to perform an MD5 on the file (removing possible metadata which could distort your result).
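A minimal sketch of that byte-identity check in Python (note this hashes the raw file bytes, so any metadata difference changes the hash; hash the decoded pixel data instead if that matters):

import hashlib

def file_md5(path):
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

identical = file_md5("a.jpg") == file_md5("b.jpg")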
If you want to compare whether they look the same, then it's a completely different story altogether. "Look the same" is meant in a very loose sense (e.g. they are exactly the same image but stored in two different file formats). For this you need advanced algorithms, which will give you a probability that two images are the same. Not being an expert in the field, I would perform the following "invented out of my head" algorithm:
take an arbitrary set of pixel points from the image.
for each pixel "grow" a polygon out of the surrounding pixels which are near in color (according to HSV colorspace)
do the same for the other image
for each polygon of one image, check the geometric similarity with all the polygons in the other image, and pick the highest value. Divide this value by the area of the polygon (to normalize).
create a vector out of the highest values obtained
the higher the norm of this vector, the higher the chance that the two images are the same.
This algorithm should be insensitive to color drift and image rotation. Maybe also scaling (you normalize against the area). But I restate: not an expert, there's probably much better, and it could make kittens cry.
I did something similar to detect movement from a MJPEG stream and record images only when movement occurs.
For each decoded image, I compared to the previous using the following method.
Resize the image to effectively thumbnail size (I resized fairly hi-res images down by a factor of ten)
Compare the brightness of each pixel to the previous image and flag if it is much lighter or darker (threshold value 1)
Once you've done that for each pixel, you can use the count of different pixels to determine whether the image is the same or different (threshold value 2)
Then it was just a matter of tuning the two threshold values.
I did the comparisons using System.Drawing.Bitmap, but as my source images were JPEGs, there was some artifacting.
It's a nice simple way to compare images for differences if you're going to roll it yourself.
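A rough sketch of the same idea in Python with OpenCV (the original used System.Drawing in C#; both threshold values below are placeholders to tune):

import cv2
import numpy as np

PIXEL_THRESHOLD = 25   # threshold value 1: how different a pixel must be
COUNT_THRESHOLD = 50   # threshold value 2: how many pixels must differ

def frames_differ(prev, curr, scale=0.1):
    # Compare two frames on downscaled grayscale thumbnails
    size = (int(curr.shape[1] * scale), int(curr.shape[0] * scale))
    a = cv2.cvtColor(cv2.resize(prev, size), cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(cv2.resize(curr, size), cv2.COLOR_BGR2GRAY)
    changed = cv2.absdiff(a, b) > PIXEL_THRESHOLD
    return np.count_nonzero(changed) > COUNT_THRESHOLD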
If you want to determine if 2 images are the same perceptually, I believe the best way to do it is using an Image Hashing algorithm. You'd compute the hash of both images and you'd be able to use the hashes to get a confidence rating of how much they match.
One that I've had some success with is pHash, though I don't know how easy it would be to use with Visual C. Searching for "Geometric Hashing" or "Image Hashing" might be helpful.
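For illustration, the Python ImageHash library exposes pHash directly (not Visual C, but it shows the workflow; the distance threshold is an assumption to tune on your data):

import imagehash
from PIL import Image

hash_a = imagehash.phash(Image.open("a.jpg"))
hash_b = imagehash.phash(Image.open("b.jpg"))

# The difference is a Hamming distance: small values mean "perceptually the same"
if hash_a - hash_b <= 5:
    print("images probably match")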
Testing for strict identity is simple: Just compare every pixel in source image A to the corresponding pixel value in image B. If all pixels are identical, the images are identical.
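For example, in Python with OpenCV and NumPy the strict check is just:

import cv2
import numpy as np

a = cv2.imread("a.png")
b = cv2.imread("b.png")

# Strictly identical: same shape and every pixel value equal
identical = a.shape == b.shape and np.array_equal(a, b)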
But I guess you don't want this kind of strict identity. You probably want images to be "identical" even if certain transformations have been applied to image B. Examples of these transformations might be:
changing image brightness globally (for every pixel)
changing image brightness locally (for every pixel in a certain area)
changing image saturation globally or locally
gamma correction
applying some kind of filter to the image (e.g. blurring, sharpening)
changing the size of the image
rotation
e.g. printing an image and scanning it again would probably include all of the above.
In a nutshell, you have to decide which transformations you want to treat as "identical" and then find image measures that are invariant to those transformations. (Alternatively, you could try to revert the transformations, but that's not possible if a transformation removes information from the image, like blurring or clipping.)
