I'm currently working on a project where I have an image of around 3.9 GB. I want to create a Google Maps-like view for this image (which is something libvips can generate) by executing the following command:
vips-dev-8.1.1\bin\vips.exe dzsave testje-131072.tiff mydz
However, when doing this, some warnings are shown and then the program crashes:
vips warning: tiff2vips: no resolution information for TIFF image "testje-131072.tiff" -- defaulting to 1 pixel per mm
vips warning: tiff2vips: no resolution information for TIFF image "testje-131072.tiff" -- defaulting to 1 pixel per mm
vips warning: vips_tracked: out of memory --- size == 48MB
Does anyone have a clue what I could do to process an image of this size using VIPS (or any other library)?
I've done some investigation myself and it seems we need BigTIFF. I've looked in the VIPS source code and saw the term BigTiff used a number of times, so I suppose it should be supported?
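One quick way to check whether the file is already a BigTIFF is to look at the version number in the TIFF header (42 = classic TIFF, 43 = BigTIFF); a minimal Python sketch, assuming the filename above and using only the standard library:

import struct

# bytes 0-1: byte order ("II" little-endian, "MM" big-endian)
# bytes 2-3: format version, 42 = classic TIFF, 43 = BigTIFF
with open("testje-131072.tiff", "rb") as f:
    header = f.read(4)

byte_order = "<" if header[:2] == b"II" else ">"
version = struct.unpack(byte_order + "H", header[2:4])[0]
print("BigTIFF" if version == 43 else "classic TIFF")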
Some information about the image:
Width: 131072
Height: 131072
Chunks: 32x32 (4096x4096 each)
Compression: LZW
When opening the image in a tool like VLIV (Very Large Image Viewer) the image opens fine.
I'm the libvips maintainer. The vips.exe binary includes bigtiff support and should easily be able to process an image of this size. It's challenging to build yourself on Windows, perhaps a week's work; I wouldn't try to make your own build unless you are very expert.
I think the problem is probably your input image: it is using very large tiles (4096 x 4096). libvips has to keep two complete lines of tiles in memory, so 4096 x 131072 x 3 x 2 bytes, which is about 3GB straight away.
I would remake your source image. Use smaller tiles, perhaps 512 x 512, and make sure you are writing a bigtiff image. Please open an issue on the libvips tracker if you still have problems; it's easier to debug stuff there:
https://github.com/jcupitt/libvips/issues
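For example, a minimal pyvips sketch of the retiling step might look like this (untested; the output filename is just a placeholder, and the same tiffsave options are available from the vips command line):

import pyvips

# sequential access streams the image instead of loading it all into RAM
image = pyvips.Image.new_from_file("testje-131072.tiff", access="sequential")

# write a tiled, LZW-compressed BigTIFF with 512 x 512 tiles
image.tiffsave("testje-512.tiff",
               tile=True, tile_width=512, tile_height=512,
               compression="lzw", bigtiff=True)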
Edit: there's now an official 64-bit Windows build of libvips and vips.exe; it might help:
http://www.vips.ecs.soton.ac.uk/supported/current/win32/vips-dev-w64-8.1.1-2.zip
I'm trying to read an animated gif with ImageMagick. The file in question is available online, located here.
My code (linked with ImageMagick/MagickWand 7) is
#include <stdlib.h>
#include <MagickWand/MagickWand.h>
int main(void)
{
    MagickWand *magick_wand;

    MagickWandGenesis();
    magick_wand = NewMagickWand();
    MagickReadImage(magick_wand, "animated.gif");

    return 0;
}
If I run this in the debugger and move to the line right after the image is read, the process is taking up 1.4GB of memory, according to top. I've found animated gifs with similar file sizes, and they don't go anywhere near this amount of memory consumption. Unfortunately, my experience with animated gif processing is very limited, so I'm not sure what's reasonable or not.
I have a few questions: Is this reasonable? Is it a bug? Does anyone know what makes the memory consumption of one file different from another? Is there a way to control the memory consumption of ImageMagick? There's apparently a file called policy.xml which can be used to specify upper memory limits, but I've set it low and still get this behavior.
If you're curious about the larger context behind this question, in real life I'm using a python library called Wand to do this in a CMS web application. If a user uploads this particular file, it causes the OOM killer to kill the app server process (the OOM limit on these machines is set fairly low).
[Update]:
I've been able to get memory limits in policy.xml to work, but I need to set both the "memory" and "map" values. Setting either low but not the other doesn't work. I'm still curious on the other points.
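For reference, the same two limits can also be set per process from Wand; a rough sketch, assuming wand.resource.limits accepts byte values for the 'memory' and 'map' keys (the numbers here are arbitrary examples):

from wand.image import Image
from wand.resource import limits

# per-process counterparts of the "memory" and "map" policies, in bytes
limits['memory'] = 64 * 1024 * 1024
limits['map'] = 128 * 1024 * 1024

with Image(filename='animated.gif') as img:
    print(img.size, len(img.sequence))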
ImageMagick6 decompresses the entire image to memory on load and represents each channel value as a sixteen-bit number. This needs a lot of memory! ImageMagick7 uses floats rather than 16-bit ints, so it'll be twice the size again. Your GIF is 1920 x 1080 RGBA pixels and has 45 frames, so that's 1920 * 1080 * 45 * 4 * 4 bytes, or about 1.4GB.
To save memory, you can get IM to open large images via a temporary disk file. This will be easier on your RAM, but will be a lot slower.
Other image processing libraries can use less memory -- for example libvips can stream images on demand rather than loading them into RAM, and this can give a large saving. With your image and pyvips I see:
$ python3
Python 3.10.7 (main, Nov 24 2022, 19:45:47) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyvips
>>> import os, psutil
>>> process = psutil.Process(os.getpid())
>>> # n=-1 means load all frames, access="sequential" means we want to stream
>>> x = pyvips.Image.new_from_file("huge2.gif", n=-1, access="sequential")
>>> # 50mb total process size after load
>>> process.memory_info().rss
49815552
>>> # compute the average pixel value for the entire animation
>>> x.avg()
101.19390990440672
>>> process.memory_info().rss
90320896
>>> # total memory use is now 90mb
>>>
I have a large .obj file of 306 MB, so I converted it into a .glb file to reduce its size. The file size decreased a lot, to 82 MB, but it is still big. I want to make this file smaller. Is there a way? If there is, please let me know.
If the .glb file can't be reduced further, please let me know about more effective ways to shrink the .obj file. One thing I've already tried is converting the .obj file to JSON, compressing it, and then inflating and loading it with pako.js. I didn't choose this method because decompression was too slow.
There might be, if it is the vertex-data that is causing the file to be that big. In that case you can use the DRACO compression-library to get the size down even further.
First, to test the compressor, you can run
npx gltf-pipeline -i original.glb -d --draco.compressionLevel 10 -o compressed.glb
(you need to have a current version of node.js installed for this to work)
If vertex-data was the reason for the file being that big, the compressed file should be considerably smaller than the original.
Now you have to go through some extra-steps to load the file, as the regular GLTFLoader doesn't support DRACO-compressed meshes.
Essentially, you need to import the THREE.DRACOLoader and the draco-decoder. Finally, you need to tell your GLTFLoader that you know how to handle DRACO-compression:
DRACOLoader.setDecoderPath('path/to/draco-decoder');
gltfLoader.setDRACOLoader(new DRACOLoader());
After that, you can use the GLTFLoader as before.
The only downside is that the decoder needs some resources of its own: decoding isn't free, and the decoder is another 320kB of data for the browser to load. I think it's still worth it if it saves you megabytes of mesh data.
I'm surprised that no one has mentioned the obvious, simple way of lossily reducing the size of a .glb file that's just a container for separate mesh and texture data:
Reduce your vertex count by collapsing adjacent vertices that are close together or coplanar, and reduce your image data by trimming out, scaling down, or using a lower bit depth for unnecessary details.
Every 2x decrease in linear polygon/pixel density should yield roughly a 4x decrease in file size, since the amount of data scales with surface area.
And then, once you've removed unneeded detail, start looking at things like DRACO, basis, fewer JPEG chroma samples, and optipng.
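To illustrate the texture side of this, here is a small Pillow sketch (texture.png is a placeholder for an image pulled out of the .glb; quantizing is lossy and this version discards alpha):

from PIL import Image

img = Image.open("texture.png")

# halve the resolution: roughly 4x less pixel data
img = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)

# drop to an 8-bit palette for a lower bit depth (discards alpha)
img = img.convert("RGB").quantize(colors=256)

img.save("texture_small.png", optimize=True)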
Suppose I have created an image unknown.tiff from a page of a PDF named doc.pdf, where the exact command used for the conversion isn't known. The size of this image is ~1 MB.
The exact command isn't known, but it is known that the differences are mainly in depth and density (a subset of these two would do too).
Now, the normal command pattern is:
convert -density 300 PDF.pdf[page-number] -depth 8 image.tiff
But this gives me a file of ~17 MB, which obviously isn't the one I am looking for. If I remove -depth, I get a file of ~34 MB, and when I remove both, I get a blurred image of 2 MB. I also tried removing only -density, but the results don't match either (~37 MB).
Since unknown.tiff is so small, I've hypothesized that it might also take less time to produce.
Since conversion time is of great concern to me, I want to know how I can work out the exact command that produced unknown.tiff.
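One way to narrow the search is to read the depth and density straight out of unknown.tiff and feed matching -depth and -density values back to convert; a Pillow sketch, assuming the relevant tags are present in the file:

from PIL import Image

with Image.open("unknown.tiff") as im:
    print("size:", im.size, "mode:", im.mode)
    print("dpi:", im.info.get("dpi"))
    # raw TIFF tags: 258 = BitsPerSample, 259 = Compression,
    # 282/283 = X/Y resolution, 296 = resolution unit
    for tag in (258, 259, 282, 283, 296):
        print(tag, im.tag_v2.get(tag))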
I am doing image manipulation on PNG images and have the following problem: after saving an image with the imwrite() function, the file size increases. For example, an image that was previously 847 KB becomes 1.20 MB after saving. Here is the code. I just read an image and then save it, but the size increases. I tried to set compression params but it doesn't help.
// -1 = IMREAD_UNCHANGED: keep the original bit depth and alpha channel
Mat image = imread("5.png", -1);

// imwrite() options come in key/value pairs
vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);
compression_params.push_back(9);   // maximum deflate compression level

imwrite("output.png", image, compression_params);
What could be the problem? Any help would be appreciated.
Thanks.
PNG has several options that influence the compression: deflate compression level (0-9), deflate strategy (HUFFMAN/FILTERED), and the choice (or strategy for dynamically choosing) of the internal prediction error filter (AVERAGE, PAETH...).
It seems OpenCV only lets you change the first one, and it doesn't have a good default value for the second. So it seems you must live with that.
Update: looking into the sources, it seems that a compression strategy setting has been added (after complaints), but it isn't documented. I wonder if that source has been released. Try setting the option CV_IMWRITE_PNG_STRATEGY to Z_FILTERED and see what happens.
See the linked source code for more details about the params.
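For reference, recent OpenCV builds expose these options in Python as well, so a quick experiment could look like this (constant names as in current cv2; older versions use the CV_-prefixed equivalents):

import cv2

# IMREAD_UNCHANGED keeps the original channel count and bit depth
img = cv2.imread("5.png", cv2.IMREAD_UNCHANGED)

params = [cv2.IMWRITE_PNG_COMPRESSION, 9,                               # max deflate effort
          cv2.IMWRITE_PNG_STRATEGY, cv2.IMWRITE_PNG_STRATEGY_FILTERED]  # Z_FILTERED
cv2.imwrite("output.png", img, params)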
@Karmar, it's been many years since your last edit.
I had similar confusion to yours in June 2021, and I found something which might benefit others like us.
PNG files seem to have this thing called mode. Here, let's focus only on three modes: RGB, P and L.
To quickly check an image's mode, you can use Python:
from PIL import Image
print(Image.open("5.png").mode)
Basically, when using P or L you are storing 8 bits/pixel, while RGB uses 3*8 bits/pixel.
For more detailed explanation, one can refer to this fine stackoverflow post: What is the difference between images in 'P' and 'L' mode in PIL?
Now, when we use OpenCV to open a PNG file, what we get is an array of three channels, regardless of which mode the file was saved in. Three channels with data type uint8 means that when we imwrite this array back to a file, no matter how hard we compress it, it will be hard to beat the original file if that was saved in P or L mode.
I guess @Karmar might have already solved this by now. For future readers, check the mode of your own 5.png.
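If your original 5.png really was a P-mode file, one workaround is to quantize the three-channel output back to a palette after OpenCV has written it, for example with Pillow (a sketch; output_p.png is just an example name, and quantizing is lossy if the image has more than 256 colours):

from PIL import Image

img = Image.open("output.png")            # the RGB file written by imwrite
img = img.quantize(colors=256)            # back to an 8-bit palette ("P" mode); assumes an RGB image
img.save("output_p.png", optimize=True)
print(Image.open("output_p.png").mode)    # "P"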
I have a PHP script which is used to resize images in a user's FTP folder for use on his website.
While slow to resize, the script has completed correctly with all images in the past. Recently, however, the user uploaded an album of 21-megapixel JPEG images and, as I have found, the script fails to convert the images without giving any PHP errors. When I consulted various logs, I found multiple Apache processes being killed off with out-of-memory errors.
The functional part of the PHP script is essentially a for loop that iterates through my images on the disk and calls a method that checks if a thumbnail exists and then performs the following:
$image = new Imagick();
$image->readImage($target);
$image->thumbnailImage(1000, 0);
$image->writeImage(realpath($basedir)."/".rescale."/".$filename);
$image->clear();
$image->destroy();
The server has 512MB of RAM, with usually at least 360MB+ free.
PHP has its memory limit currently set at 96MB, but I have set it higher before without any effect on the issue.
By my estimates, a 21-megapixel image should occupy in the region of 80MB+ when uncompressed, so I am puzzled as to why the RAM is disappearing so rapidly unless the ImageMagick objects are not being removed from memory.
Is there some way I can optimise my script to use less memory or garbage collect more efficiently?
Do I simply not have the RAM to cope with such large images?
Cheers
See this answer for a more detailed explanation.
imagick uses a shared library and its memory usage is out of reach for PHP, so tuning PHP memory and garbage collection won't help.
Try adding this prior to creating the new Imagick() object:
// pixel cache max size
Imagick::setResourceLimit(Imagick::RESOURCETYPE_MEMORY, 32);
// maximum amount of memory map to allocate for the pixel cache
Imagick::setResourceLimit(Imagick::RESOURCETYPE_MAP, 32);
It will cause imagick to swap to disk (it defaults to /tmp) when it needs more than 32 MB for juggling images. It will be slower, but it will not run out of RAM (unless /tmp is on a ramdisk, in which case you need to change where imagick writes its temp files).
MattBianco is nearly correct; the only change is that the memory limits are in bytes, so 32MB would be 33554432:
// pixel cache max size
Imagick::setResourceLimit(Imagick::RESOURCETYPE_MEMORY, 33554432);
// maximum amount of memory map to allocate for the pixel cache
Imagick::setResourceLimit(Imagick::RESOURCETYPE_MAP, 33554432);
Call $image->setSize() before $image->readImage() to have libjpeg resize the image whilst loading to reduce memory usage.
(edit) Example usage: Efficient JPEG Image Resizing in PHP