How to know which ImageMagick command settings created a particular image

Suppose I have created an image, unknown.tiff, from a page of a PDF named doc.pdf, but the exact conversion command is not known. The size of this image is ~1 MB.
What is known is that the settings mostly concern depth and density. (A subset of these two would do, too.)
Now, the normal command pattern is:
convert -density 300 doc.pdf[page-number] -depth 8 image.tiff
But this gives me a file of ~17 MB, which obviously isn't the one I am looking for. If I remove -depth, I get a file of ~34 MB; if I remove both options, I get a blurred image of ~2 MB; and if I remove only -density, the result still doesn't match (~37 MB).
Since the output unknown.tiff is so small, I suspect it also took less time to produce.
Conversion time is a great concern for me, so I want to know how I can work out the exact command that produced unknown.tiff.
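A hedged starting point (not part of the original question): a TIFF usually records its resolution and bits per sample in its metadata, so inspecting unknown.tiff directly may reveal the -density and -depth values that were used; ImageMagick's identify -verbose unknown.tiff prints the same information. A minimal sketch with Pillow, assuming the file kept its resolution tags:
from PIL import Image

img = Image.open("unknown.tiff")
# 'dpi' is only present if the TIFF stores X/Y resolution tags
print("dpi:", img.info.get("dpi"))
# mode hints at the bit depth: 'L'/'RGB' mean 8 bits per sample, 'I;16' means 16
print("mode:", img.mode, "size:", img.size)
# raw TIFF tags, e.g. 258 = BitsPerSample, 282/283 = X/Y resolution, 259 = Compression
print(dict(img.tag_v2))
Comparing the reported resolution, bits per sample, and compression against a few candidate convert runs should narrow the search quickly.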

Related

Can I make the glb file smaller?

I have a large obj file of 306 MB, so I converted it into a glb file to reduce its size. The file has shrunk a lot, to 82 MB, but it is still big. I want to make this file smaller. Is there a way? If there is, please let me know.
If you can't reduce the glb file further, let me know more effective ways to reduce the obj file. One of the things I've already tried is converting the obj file to JSON, compressing it, and decompressing and loading it with pako.js. I didn't go with this method because decompression was too slow.
There might be, if it is the vertex data that is making the file that big. In that case you can use the DRACO compression library to get the size down even further.
First, to test the compressor, you can run
npx gltf-pipeline -i original.glb -d --draco.compressionLevel 10 -o compressed.glb
(you need to have a current version of node.js installed for this to work)
If vertex-data was the reason for the file being that big, the compressed file should be considerably smaller than the original.
Now you have to go through some extra steps to load the file, as the regular GLTFLoader doesn't support DRACO-compressed meshes.
Essentially, you need to import THREE.DRACOLoader and the draco decoder. Finally, you need to tell your GLTFLoader that you know how to handle DRACO compression:
// point the loader at the directory containing the draco decoder files
DRACOLoader.setDecoderPath('path/to/draco-decoder');
// register a DRACOLoader instance so GLTFLoader can decode compressed meshes
gltfLoader.setDRACOLoader(new DRACOLoader());
After that, you can use the GLTFLoader as before.
The only downside of this is that the decoder itself needs some resources: decoding isn't free and the decoder itself is another 320kB of data to be loaded by the browser. I think it's still worth it if it saves you megabytes of mesh-data.
I'm surprised that no one has mentioned the obvious, simple way of lossily reducing the size of a .glb file that's just a container for separate mesh and texture data:
Reduce your vertex count by collapsing adjacent vertices that are close together or coplanar, and reduce your image data by trimming out, scaling down, or using a lower bit depth for unnecessary details.
Every 2X decrease in surface polygon/pixel density should yield roughly a 4X decrease in file size.
And then, once you've removed unneeded detail, start looking at things like DRACO, basis, fewer JPEG chroma samples, and optipng.
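To put a number on the texture side (an added sketch, not part of the original answer; file names are hypothetical): halving a texture's width and height leaves a quarter of the pixels, which is where the rough 4X figure comes from. With Pillow:
from PIL import Image

tex = Image.open("albedo.png")                       # a texture extracted from the .glb
w, h = tex.size
small = tex.resize((w // 2, h // 2), Image.LANCZOS)  # half the resolution, ~1/4 of the pixels
small.save("albedo_half.png", optimize=True)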

LibVIPS crashing when processing 3.9 GB TIFF image

I'm currently working on a project where I have an image of around 3.9 GB. I want to create a Google Maps-like view of this image (which is something libvips can generate) by executing the following command:
vips-dev-8.1.1\bin\vips.exe dzsave testje-131072.tiff mydz
However when doing this some warnings are shown and after that the program crashes:
vips warning: tiff2vips: no resolution information for TIFF image "testje-131072.tiff" -- defaulting to 1 pixel per mm
vips warning: tiff2vips: no resolution information for TIFF image "testje-131072.tiff" -- defaulting to 1 pixel per mm
vips warning: vips_tracked: out of memory --- size == 48MB
Has anyone got a clue what I could do to be able to process an image of this size using vips (or any other library)?
I've done some investigating myself, and it seems we need BigTIFF. I've looked in the vips source code and saw the term BigTiff used a number of times, so I suppose it should be supported?
Some information about the image:
Width: 131072
Height: 131072
Chunks: 32x32 (4096x4096 each)
Compression: LZW
When opened in a tool like VLIV (Very Large Image Viewer), the image displays fine.
I'm the libvips maintainer. The vips.exe binary includes BigTIFF support and should easily be able to process an image of this size. It's challenging to build yourself on Windows, perhaps a week's work; I wouldn't try to make your own unless you are very expert.
I think the problem is probably your input image: it is using very large tiles (4096 x 4096), so libvips has to keep two complete rows of tiles in memory, that is 4096 x 131072 x 3 x 2 bytes, which is 3 GB straight away.
I would remake your source image. Use smaller tiles, perhaps 512 x 512, and make sure you are writing a bigtiff image. Please open an issue on the libvips tracker if you still have problems, it's easier to debug stuff there.
https://github.com/jcupitt/libvips/issues
Edit: there's now an official 64-bit Windows build of libvips and vips.exe, it might help:
http://www.vips.ecs.soton.ac.uk/supported/current/win32/vips-dev-w64-8.1.1-2.zip
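As an added sketch (not part of the answer above), the retiling could be scripted with pyvips; file names are placeholders, and rereading the existing 4096-pixel tiles will itself need a few GB of RAM and a 64-bit build:
import pyvips

# open the source image (pyvips is demand-driven, so this is not an eager load)
image = pyvips.Image.new_from_file("testje-131072.tiff")

# rewrite it as a tiled BigTIFF with small 512 x 512 tiles, keeping LZW compression
image.tiffsave(
    "retiled.tiff",
    tile=True,
    tile_width=512,
    tile_height=512,
    compression="lzw",
    bigtiff=True,
)
After that, running dzsave on retiled.tiff should have a much smaller working set.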

Image Averaging and Saving output

I'm planning to process quite a large number of images and would like to average every 5 consecutive images. My images are saved in the .dm4 file format.
Essentially, I want to produce a single averaged image output for each 5 images that I can save. So for instance, if I had 400 images, I would like to get 80 averaged images that would represent the 400 images.
I'm aware that there's the Running Z Projector plugin but it does a running average and doesn't give me the reduced number of images I'm looking for. Is this something that has already been done before?
Thanks for the help!
It looks like the Image>Stacks>Tools>Grouped Z_Projector does exactly what you want.
I found it by opening the command finder ('L') and filtering on "project".
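Outside ImageJ, the same grouped averaging is straightforward in NumPy; here is a rough sketch that assumes the frames are already loaded as a single array (reading the .dm4 files is not shown):
import numpy as np

def grouped_average(stack: np.ndarray, group: int = 5) -> np.ndarray:
    """Average every `group` consecutive frames of a (frames, height, width) stack."""
    n = (stack.shape[0] // group) * group            # drop any incomplete trailing group
    grouped = stack[:n].reshape(-1, group, *stack.shape[1:])
    return grouped.mean(axis=1)

# e.g. 400 frames in, 80 averaged frames out
frames = np.random.rand(400, 512, 512).astype(np.float32)   # stand-in for the real data
averaged = grouped_average(frames, group=5)
print(averaged.shape)   # (80, 512, 512)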

OpenCV imwrite increases the size of png image

I am doing image manipulation on PNG images and have the following problem: after saving an image with the imwrite() function, the file size increases. For example, an image that was previously 847 KB becomes 1.20 MB after saving. Here is the code. I just read an image and then save it, yet the size increases. I tried to set the compression params but it doesn't help.
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

Mat image = imread("5.png", -1);  // -1 = IMREAD_UNCHANGED: keep depth and alpha as stored

vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);  // params come in key/value pairs
compression_params.push_back(9);                           // 9 = maximum deflate compression

imwrite("output.png", image, compression_params);
What could be the problem? Any help, please.
Thanks.
PNG has several options that influence the compression: the deflate compression level (0-9), the deflate strategy (HUFFMAN/FILTERED), and the choice (or the strategy for dynamically choosing) of the internal prediction-error filter (AVERAGE, PAETH...).
It seems OpenCV only lets you change the first one, and it doesn't have a good default value for the second. So, it seems you must live with that.
Update: looking into the sources, it seems that a compression-strategy setting has been added (after complaints), but it isn't documented. I wonder if that source has been released. Try setting the option CV_IMWRITE_PNG_STRATEGY to Z_FILTERED and see what happens.
See the linked source code for more details about the params.
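For later readers: recent OpenCV builds expose the strategy flag directly. A small Python sketch, assuming your build has these constants (IMWRITE_PNG_STRATEGY_FILTERED corresponds to zlib's Z_FILTERED):
import cv2

img = cv2.imread("5.png", cv2.IMREAD_UNCHANGED)
params = [
    cv2.IMWRITE_PNG_COMPRESSION, 9,   # maximum deflate compression
    cv2.IMWRITE_PNG_STRATEGY, cv2.IMWRITE_PNG_STRATEGY_FILTERED,  # filtered strategy
]
cv2.imwrite("output.png", img, params)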
@Karmar, it's been many years since your last edit.
I had similar confusion to yours in June 2021, and I found out something which might benefit others like us.
PNG files seem to have this thing called mode. Here, let's focus only on three modes: RGB, P and L.
To quickly check an image's mode, you can use Python:
from PIL import Image
print(Image.open("5.png").mode)
Basically, P and L use 8 bits/pixel, while RGB uses 3*8 bits/pixel.
For more detailed explanation, one can refer to this fine stackoverflow post: What is the difference between images in 'P' and 'L' mode in PIL?
Now, when we use OpenCV to open a PNG file, what we get is an array of three channels, regardless of which mode the file was saved in. With three channels of dtype uint8, no matter how hard you compress when you imwrite this array to a file, it will be hard to beat the original if that original was saved in P or L mode.
I guess @Karmar might have already had this question solved. For future readers, check the mode of your own 5.png.
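If the original really was a P-mode file, a hedged workaround (not part of the answer above) is to quantize the three-channel output back to a palette image with Pillow before comparing file sizes:
from PIL import Image

img = Image.open("output.png")   # the 3-channel file written by OpenCV
# reduce back to a 256-colour palette ('P' mode); lossy if the image uses more colours
palette = img.convert("P", palette=Image.ADAPTIVE, colors=256)
palette.save("output_p.png", optimize=True)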

pipeline image compression

I have a custom made web server running that I use for scanning documents. To activate the scanner and load the image on screen, I have a scan button that links to a page with the following image tag:
<img src="http://myserver/archive/location/name.jpg?scan" />
When the server receives the request for a ?scan file it streams the output of the following command, and writes it to disk on the requested location.
scanimage --resolution 150 --mode Color | convert - jpg:-
This works well and I am happy with this simple setup. The problem is that convert (ImageMagick) buffers the output of scanimage and spits out the JPEG image only when the scan is complete. The result is that the webpage keeps loading for a long time, with the risk of timeouts. It also keeps me from seeing the image as it is scanned, which should otherwise be possible because that is exactly how baseline-encoded JPEG images show up on slow connections.
My question is: is it possible to do JPEG encoding without buffering the whole image, or is the operation inherently global? If it is possible, what tools could I use? One thought I had is separately encoding strips of eight lines, but I do not know how to put these chunks together. If it is not possible, is there another compression format that does allow this sort of pipeline encoding? My only restriction is that the format should be supported by mainstream browsers.
Thanks!
You could subdivide the image with a space-filling curve. An SFC recursively subdivides the surface into smaller tiles and, because of its fractal dimension, reduces the 2D complexity to 1D. Once you have subdivided the image, you can use this curve to scan it continuously. Or you can use a BFS and some sort of low-frequency-detail filter to continuously scan your image at higher and higher resolutions. You want to look for Nick's spatial index Hilbert curve quadtree blog, but I don't think you can put the tiles together with the JPEG format (cat?). Or you could continuously reduce the resolution?
scanimage --resolution [1-150] --mode Color | convert - jpg:-
