I have an image (JPEG) converted to a ByteArray in ActionScript. The SWF loads another image, and I want to know whether the loaded image is identical to the one I converted to a ByteArray. Is it possible/safe to compare two ByteArrays as if they were two strings? Or is there another way to do it?
It's probably possible (have you tried it?), but I would compare the images instead (i.e. using BitmapData.compare), since the images are what matter: different JPEG encoders produce different bytes for almost identical images. Plus, by comparing the images you can set quite a "natural" similarity threshold.
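In ActionScript the per-pixel diff is essentially what BitmapData.compare gives you. As a language-neutral illustration of the threshold idea, here is a minimal Python sketch (numpy and Pillow assumed; the function name and tolerance value are my own, not from either library):

import numpy as np
from PIL import Image

def images_match(path_a, path_b, tolerance=2.0):
    # Compare decoded pixels rather than encoded bytes: two JPEG files
    # can differ byte-for-byte yet be visually identical.
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    if a.shape != b.shape:
        return False
    return float(np.abs(a - b).mean()) <= tolerance  # tolerance is an arbitrary choice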
I'm trying to open some textures from an iPhone game that I believe use the PVRTC format (pictured below).
(image: PVRTC image format?)
However, everything I've tried to open it has failed. PVRTexTool won't decompress it; the program only opens files with the .pvr extension and doesn't recognise this one. I've also tried TexturePacker, but it doesn't recognise the file either. This has been baffling me for a few days; any help towards decompressing the file would be appreciated, thanks.
I can only offer some suggestions.
iOS restricts PVRTC textures to square, power-of-two sizes, and they will be either 2 bpp or, more likely, 4 bpp. If we initially assume no MIP mapping, there can thus be only a few possible sizes for the raw data. From that you might be able to deduce the size of any header data and strip it off. I think the PowerVR SDK from Imagination Technologies provides decoder source code in C (or at least it did last time I checked, though admittedly that was a few years ago) that you can use once you have the raw data. Also, the data might be in Morton order.
If MIP mapping is used, then I think you'll need to include the entire MIP map chain in your size calculation, but note that the smallest maps will each be rounded up to at least 8 bytes.
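To make that concrete, here is a rough Python sketch (everything in it is a guess built only from the constraints above; mystery_texture is a placeholder file name, and the 256-byte header cap is arbitrary) that enumerates plausible raw-data sizes and reports what header length each would imply:

import os

def mip_chain_bytes(dim, bpp):
    # Sum the sizes of all MIP levels; each level is rounded up to
    # at least 8 bytes, per the note above.
    total, d = 0, dim
    while d >= 1:
        total += max(d * d * bpp // 8, 8)
        d //= 2
    return total

def candidate_payloads(max_dim=4096):
    # Square, power-of-two textures at 2 or 4 bpp, with or without MIPs.
    out = []
    dim = 8
    while dim <= max_dim:
        for bpp in (2, 4):
            out.append((dim, bpp, False, dim * dim * bpp // 8))
            out.append((dim, bpp, True, mip_chain_bytes(dim, bpp)))
        dim *= 2
    return out

file_size = os.path.getsize("mystery_texture")  # placeholder file name
for dim, bpp, mips, payload in candidate_payloads():
    header = file_size - payload
    if 0 <= header <= 256:  # assume headers are small
        print(dim, "x", dim, bpp, "bpp, mips:", mips, "-> header would be", header, "bytes")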
I need to present some microscopy images side by side and ensure they are directly comparable by eye in a presentation. The pictures were all taken with the same exposure, gain, et cetera, so the underlying pixel values should be comparable.
However, the microscopy software has a nasty habit of saving the files with one of the colour channels saturated (for some reason), so I have to process the images for presentations.
Previously I'd been using a macro that loops through a folder and calls the scripting command
run("Enhance Contrast", "saturated=0.35");
But on reflection I don't think this is the proper command to call: I don't think it produces images that are directly comparable to each other by eye.
I had thought that the command
run("Color Balance...");
resetMinAndMax();
would be best as it should show the full display range. But the display values shown on the histogram do vary depending on the image.
Is this appropriate for making directly comparable images, or would a command like
setMinAndMax();
be more appropriate, with 0 as the minimum and an arbitrary figure as the maximum? This is driving me mad, as I keep being asked whether my images are directly comparable, and I simply don't know!
Usually, resetMinAndMax(); is the best way to ensure that your images are displayed consistently.
Note however that it also depends on the bit depth of your images.
8-bit grayscale images and RGB color images are displayed with a range of 0-255 after resetMinAndMax().
For 16-bit (short) and 32-bit (float) images, the display range is calculated from the actual minimum and maximum values of the image (see the documentation of the Brightness/Contrast dialog and the resetMinAndMax() macro function).
So for 16-bit and 32-bit images, you can use the Set Display Range dialog (Set button in the B&C window) with one of the default Unsigned 16-bit range options to ensure consistent display, or the macro call:
setMinAndMax(0, 65535);
If you want to use the images in a presentation, copy them using Edit > Copy to System, or convert them to either 8-bit or RGB before saving and inserting them into the presentation.
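If you later script the export outside ImageJ, the same idea can be expressed in a few lines of Python (numpy and Pillow assumed; the function name is mine): map every image through one fixed display range so the exported 8-bit copies stay directly comparable.

import numpy as np
from PIL import Image  # Pillow assumed

def export_with_fixed_range(path_in, path_out, lo=0, hi=65535):
    # Map every 16-bit image through the SAME fixed display range,
    # then save an 8-bit copy suitable for a presentation.
    img = np.asarray(Image.open(path_in), dtype=np.float64)
    scaled = np.clip((img - lo) / (hi - lo), 0.0, 1.0) * 255.0
    Image.fromarray(scaled.astype(np.uint8)).save(path_out)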
I'm trying to do image comparison to detect changes in a video processing application. These are two images that look identical to me, but are different according to both
http://pdiff.sourceforge.net/
and http://www.itec.uni-klu.ac.at/lire/nightly/api/net/semanticmetadata/lire/imageanalysis/LireFeature.html
Can anyone explain the difference? Eventually I need to find a library that can detect differences without producing false positives.
The two images are different.
I used GIMP (open source) to stack the two images one on top of the other and set the top layer's mode to Difference. The result was a very faint, nearly black image, i.e. very little difference. I then used Curves to raise the tones, which revealed what appear to be JPEG artifacts, even though the files given are PNGs. I recommend GIMP; sometimes I use it instead of Photoshop.
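The same layer-difference trick can be scripted. A minimal Python sketch (numpy and Pillow assumed; the file names are placeholders and the 16x boost factor is arbitrary):

import numpy as np
from PIL import Image

# Load both images as signed integers so the subtraction cannot wrap around.
a = np.asarray(Image.open("a.png").convert("RGB"), dtype=np.int16)
b = np.asarray(Image.open("b.png").convert("RGB"), dtype=np.int16)

diff = np.abs(a - b)
# "Raise the tones": multiply the faint difference so artifacts become visible.
boosted = np.clip(diff * 16, 0, 255).astype(np.uint8)
Image.fromarray(boosted).save("diff_boosted.png")
print("max per-channel difference:", int(diff.max()))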
Using GIMP to do a blink comparison between the layers at 400% view, I would guess that the first image is closer to the original. The second may be a saved copy of the first, or a copy of the original saved at a lower quality setting.
It seems that the metadata has been stripped from both images (I haven't looked definitively), so there are no clues there.
There was a program called Unique Filer that I used for years. It is tunable and rather good. But any comparator is likely to generate a number of false positives if you tune it tightly enough to make sure it doesn't miss duplicates. If you only want to catch images that are very similar, like this pair, then you can tune it very tightly. It is old, though, and may not work on Windows 7 or later.
I would like to find good image checkers / comparators too. I've considered writing my own program.
I have an image of size 720x576 captured from my webcam every second.
I ultimately display this in a canvas control via my server.
I convert this JPEG to a byte array (31553 bytes) and upload it using WCF.
I have been debating whether to split this image into 4 smaller images and uploading them 1 after the other. When each image is uploaded it is drawn on a hidden canvas. Then once all 4 images are uploaded I update the visible canvas with the 'cached' canvas.
Will splitting it into 4 images be a better/faster way to upload the image, or will it make no difference at all?
I will write and conduct tests for this now, but I thought I would ask first so I can learn what the accepted wisdom is.
Thanks
From a compression point of view, the total size of the four images should be larger than that of the single image: you can think of it as compressing the redundant information four times over. If you keep dividing and compressing, you may end up effectively sending every pixel.
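You can measure this directly. A small Python experiment (Pillow assumed; frame.jpg is a placeholder for one 720x576 capture, and quality 85 is an arbitrary setting) that re-encodes the whole frame versus its four quadrants and compares total size:

import io
from PIL import Image  # Pillow assumed

def jpeg_size(img, quality=85):
    # Encode in memory and measure the resulting JPEG size.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getbuffer().nbytes

frame = Image.open("frame.jpg")  # placeholder: one webcam capture
w, h = frame.size
quads = [frame.crop((x, y, x + w // 2, y + h // 2))
         for x in (0, w // 2) for y in (0, h // 2)]

print("whole frame:   ", jpeg_size(frame), "bytes")
print("four quadrants:", sum(jpeg_size(q) for q in quads), "bytes")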
Another way to look at this is from the network's point of view. Often internet bandwidth is the limiting factor, so sending one file would probably be best (as it would be smaller). In another scenario there might be congestion in the network, in which case multiple data streams (if you upload them in parallel and the server is multi-threaded) are more likely to grab a larger chunk of the bandwidth.
My ultimate goal is to get meaningful snapshots from MP4 videos that are either 30 min or 1 hour long. "Meaningful" is a bit ambitious, so I have simplified my requirements.
The image should be crisp: non-overlapping, and ideally not blurry. Initially, I thought grabbing a keyframe would work, but I had no idea that keyframes could have overlapping images embedded in them, like this:
Of course, some keyframe images look like this and those are much better:
I was wondering if someone might have source code to:
Take a sequence of, say, 10-15 consecutive keyframes (JPG or PNG) and identify the best keyframe among them.
This must happen entirely programmatically. I found this paper: http://research.microsoft.com/pubs/68802/blur_determination_compressed.pdf
and felt that I could "rank" a few images based on it, but then I was dissuaded by this link: Extracting DCT coefficients from encoded images and video, given that my source video is an MP4. Of course, this confuses me, because the input to the system is just a sequence of JPG images.
Another link that is interesting is:
Detection of Blur in Images/Video sequences
However, I am not sure if this will work for "overlapping" images.
The first pic is from an interlaced video at a scene change: the two fields belong to different scenes. De-interlacing the video will help; try the ffmpeg filter -filter:v yadif . I am not sure exactly how yadif works, but if it extracts the two fields and scales them back to the original size, it should work. Another approach is to detect whether the two fields (extract alternate lines, form images with half the height, and diff them) are very different from each other, and ignore those frames.
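A rough sketch of that second approach in Python (numpy and Pillow assumed; the threshold of 20 grey levels is an arbitrary starting point, not a tested value):

import numpy as np
from PIL import Image

def fields_disagree(path, threshold=20.0):
    # Split the frame into its two interlaced fields (even vs. odd rows)
    # and report whether they look like they come from different scenes.
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    even, odd = img[0::2], img[1::2]
    n = min(len(even), len(odd))  # tolerate odd frame heights
    mad = np.abs(even[:n] - odd[:n]).mean()  # mean absolute difference
    return mad > threshold  # True -> likely a scene-change keyframe to skip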