I have the following lines of code in a function that reads an image from Amazon S3. The image I am reading is 1.37 MB, whereas when I ran the profiler it says the read function in the ImageMagick library takes 5.6 MB, which is very high. Can anyone explain this behaviour? I am attaching the snapshot of my profiler as well as the code.
AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(
    accessKey,
    secretKey
);

GetObjectRequest request = new GetObjectRequest
{
    BucketName = bucketName,
    Key = keyName
};

var response = client.GetObject(request);
MagickImage imgStream = new MagickImage(response.ResponseStream);
The size of your image on disk is not what determines the size of the image in memory. The amount of memory required depends on the dimensions (width/height) of the image. When the image is loaded, the raw file data is 'converted' to pixel data. Magick.NET uses either 8 or 16 bits per channel, depending on whether you use the Q8 or Q16 build. So when you have an image with 4 channels (RGBA) and you are using the Q16 version of Magick.NET, each pixel takes 64 bits. For an image of 1920x1080 you will need 1920 * 1080 * 64 = 132,710,400 bits, which is about 16.6 MB. The size on disk will be smaller most of the time because most image formats compress the pixel data when they save it.
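If it helps, the arithmetic is easy to reproduce. A minimal sketch in Python, purely to illustrate the calculation (the dimensions and channel count are the example values above, not taken from your image):

width, height = 1920, 1080    # example dimensions
channels = 4                  # RGBA
bits_per_channel = 16         # Q16 build of Magick.NET
size_bits = width * height * channels * bits_per_channel
print(size_bits / 8 / 1000 / 1000)   # ~16.6 MB of raw pixel data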
I want to extract data from a PNG file in which these parameters are always the same:
Bit depth: 8
Color type: 6
Compression method: 0
Filter method: 0
Interlace method: 0
What I want is an array of all pixels as RGBA. I already have the IDAT chunk extracted, but I don't know what to do next.
According to libpng, I have to reverse the process that created the image data.
As I understand it, I have to decompress the chunk contents and reverse the filtering, which should give me the truecolor pixel representation, but I don't know how to do the decompression or how to reverse the filtering.
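For reference, the two steps (inflate the zlib stream, then un-filter each scanline) can be sketched roughly like this in Python. It assumes bit depth 8 and color type 6 (so 4 bytes per pixel), no interlacing, and that the data of all IDAT chunks has been concatenated into one byte string:

import zlib

def paeth(a, b, c):
    # PNG's Paeth predictor: pick whichever of left/up/upper-left
    # is closest to a + b - c.
    p = a + b - c
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:
        return a
    return b if pb <= pc else c

def decode_idat(idat, width, height):
    bpp = 4                       # color type 6, bit depth 8 -> RGBA, 4 bytes/pixel
    stride = width * bpp
    raw = zlib.decompress(idat)   # all IDAT chunks form a single zlib stream
    out = bytearray()
    prev = bytearray(stride)      # the row above the first row counts as all zeros
    pos = 0
    for _ in range(height):
        ftype = raw[pos]; pos += 1              # each scanline starts with a filter byte
        row = bytearray(raw[pos:pos + stride]); pos += stride
        for i in range(stride):
            a = row[i - bpp] if i >= bpp else 0    # left (already reconstructed)
            b = prev[i]                            # up
            c = prev[i - bpp] if i >= bpp else 0   # upper-left
            if ftype == 1:   row[i] = (row[i] + a) & 0xFF               # Sub
            elif ftype == 2: row[i] = (row[i] + b) & 0xFF               # Up
            elif ftype == 3: row[i] = (row[i] + (a + b) // 2) & 0xFF    # Average
            elif ftype == 4: row[i] = (row[i] + paeth(a, b, c)) & 0xFF  # Paeth
        out += row
        prev = row
    return bytes(out)   # RGBA pixels, row-major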
OpenCV 2.4's VideoWriter couldn't save video files larger than 2GB, since it only accepts .avi files. I am wondering if this is still the case in OpenCV 3.0, or if it can save other kinds of video files that don't have this limitation.
I tried to find any documentation pointing to a 2GB limit, or a release note saying it can handle larger files, but I couldn't find either.
Even though the OpenCV 3.0-beta documentation states otherwise, OpenCV 3.0's VideoWriter seems to handle other file formats, such as mkv, as shown in this issue.
I adapted the code from the above issue to generate a 4GB mkv video (4096 frames of random 2048x2048).
The thing to be aware of is that the image size should be passed as (width, height) to the VideoWriter, whereas the numpy array should be initialized with (height, width). VideoWriter will fail silently otherwise.
You will also require a recent OpenCV 3.0 source to handle uncompressed streams.
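A rough Python sketch of that test follows. The codec choice (FFV1) and frame rate are assumptions on my part, not taken from the linked issue, and whether a given codec is available depends on how your OpenCV/FFmpeg was built:

import cv2
import numpy as np

width, height = 2048, 2048
fourcc = cv2.VideoWriter_fourcc(*'FFV1')             # assumed codec; depends on your build
writer = cv2.VideoWriter('big.mkv', fourcc, 30.0,
                         (width, height))            # note: (width, height) here
for _ in range(4096):
    frame = np.random.randint(0, 256, (height, width, 3),
                              dtype=np.uint8)        # note: (height, width) here
    writer.write(frame)
writer.release()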
This is not an OpenCV limitation. An AVI file cannot be larger than 2 GB due to a format limitation: its 4-byte size field, treated as a signed integer, has a maximum value of 2,147,483,647.
Is it possible to pack the video into another container (mkv, etc.) with OpenCV?
the RIFF header has the following form:
'RIFF' fileSize fileType (data)
where 'RIFF' is the literal FOURCC code 'RIFF',
fileSize is a 4-byte value giving the size of the data in the file,
and fileType is a FOURCC that identifies the specific file type.
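You can inspect that 4-byte field directly; a small Python sketch (the file name is just an example):

import struct

with open('video.avi', 'rb') as f:
    riff, file_size, file_type = struct.unpack('<4sI4s', f.read(12))
print(riff, file_size, file_type)   # b'RIFF', size of the data, b'AVI '
# file_size is a 32-bit field: at most ~4 GiB unsigned, and only
# 2,147,483,647 bytes when writers/readers treat it as signed.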
I'm currently working on a project where I have an image of around 3.9 GB. I want to create a Google Maps-like view of this image (which is something libvips can generate) by executing the following command:
vips-dev-8.1.1\bin\vips.exe dzsave testje-131072.tiff mydz
However, when doing this, some warnings are shown and then the program crashes:
vips warning: tiff2vips: no resolution information for TIFF image "testje-131072.tiff" -- defaulting to 1 pixel per mm
vips warning: tiff2vips: no resolution information for TIFF image "testje-131072.tiff" -- defaulting to 1 pixel per mm
vips warning: vips_tracked: out of memory --- size == 48MB
Does anyone have a clue what I could do to process an image of this size using vips (or any other library)?
I've done some investigating myself, and it seems we need BigTIFF. I've looked in the VIPS source code and saw the term BigTiff used a number of times, so I suppose it should be supported?
Some information about the image:
Width: 131072
Height: 131072
Chunks: 32x32 (4096x4096 each)
Compression: LZW
When opening the image in a tool like VLIV (Very Large Image Viewer) the image opens fine.
I'm the libvips maintainer. The vips.exe binary includes BigTIFF support and should easily be able to process an image of this size. It's challenging to build yourself on Windows, perhaps a week's work; I wouldn't try to make your own unless you are very expert.
I think the problem is probably your input image: it is using very large tiles (4096 x 4096). libvips has to keep two complete rows of tiles in memory, that is 4096 x 131072 pixels x 3 bytes x 2, which is 3GB straight away.
I would remake your source image. Use smaller tiles, perhaps 512 x 512, and make sure you are writing a bigtiff image (a sketch follows below). Please open an issue on the libvips tracker if you still have problems, it's easier to debug stuff there.
https://github.com/jcupitt/libvips/issues
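For what it's worth, the re-tiling can be done with vips itself. A sketch using the pyvips Python binding (the option names mirror vips tiffsave on the command line; the file names are examples):

import pyvips

image = pyvips.Image.new_from_file('testje-131072.tiff', access='sequential')
image.tiffsave('retiled.tiff',
               tile=True, tile_width=512, tile_height=512,  # smaller tiles
               bigtiff=True,                                # container safe for >4GB
               compression='lzw')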
Edit: there's now an official 64-bit Windows build of libvips and vips.exe, it might help:
http://www.vips.ecs.soton.ac.uk/supported/current/win32/vips-dev-w64-8.1.1-2.zip
I have a 16-bit grayscale image. I have tried both .png and .tif; .tif works somewhat. I have the following code:
CGDataProviderRef l_Img_Data_Provider = CGDataProviderCreateWithFilename(
    [m_Name cStringUsingEncoding:NSASCIIStringEncoding] );

CGImageRef l_CGImageRef = CGImageCreate( m_Width, m_Height, 16, 16, m_Width * 2,
    CGColorSpaceCreateDeviceGray(), kCGBitmapByteOrder16Big,
    l_Img_Data_Provider, NULL, false, kCGRenderingIntentDefault );

test_Image = [[UIImage alloc] initWithCGImage:l_CGImageRef];
[_test_Image_View setImage:test_Image];
This results in the following image:
faulty gradient
As you can see, there seems to be an issue at the beginning of the image (could it be trying to use the byte data from the header?), and the image is offset by about a fifth (a little harder to see: look at the left and the right, there is a faint line about a fifth of the way in from the right).
My goal is to convert this to a Metal texture and use it from there. I'm also having issues there; it seems like a byte-order issue, but maybe we can come back to that.
dave
CGDataProvider doesn't know about the format of the data that it stores. It is just meant for handling generic data:
"The CGDataProvider header file declares a data type that supplies
Quartz functions with data. Data provider objects abstract the
data-access task and eliminate the need for applications to manage
data through a raw memory buffer."
CGDataProvider
Because CGDataProvider is generic, you must describe the format of the image data through the CGImageCreate parameters. PNGs and JPGs have their own CGImageCreateWith... functions (CGImageCreateWithPNGDataProvider and CGImageCreateWithJPEGDataProvider) for handling encoded data.
The CGImage parameters in your example correctly describe a 16-bit grayscale raw byte format, but say nothing about the TIF encoding, so I would guess you are right that the corrupted pixels you are seeing come from the file headers.
There may be other ways to load a 16-bit grayscale image on iOS, but to use that method (or the very similar Metal method) you would need to parse the image bytes out of the TIF file and pass those into the function, or find another way to store and parse the image data.
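One possible workaround, sketched in Python rather than on-device: pre-convert the TIF offline into a headerless raw dump that CGDataProviderCreateWithFilename can then read without hitting any header bytes. This assumes Pillow and numpy are available, and the file names are illustrative:

from PIL import Image
import numpy as np

img = Image.open('gradient.tif')            # 16-bit grayscale TIF
pixels = np.asarray(img, dtype=np.uint16)
pixels.byteswap().tofile('gradient.raw')    # swap to big-endian to match
                                            # kCGBitmapByteOrder16Big; drop the
                                            # swap if you use host byte order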
I have a PHP script which is used to resize images in a user's FTP folder for use on his website.
While slow to resize, the script has completed correctly with all images in the past. Recently however, the user uploaded an album of 21-Megapixel JPEG images and as I have found, the script is failing to convert the images but not giving out any PHP errors. When I consulted various logs, I've found multiple Apache processes being killed off with Out Of Memory errors.
The functional part of the PHP script is essentially a for loop that iterates through my images on the disk and calls a method that checks if a thumbnail exists and then performs the following:
$image = new Imagick();
$image->readImage($target);          // decode the full-size image into memory
$image->thumbnailImage(1000, 0);     // 1000px wide; 0 keeps the aspect ratio
$image->writeImage(realpath($basedir)."/".rescale."/".$filename);
$image->clear();                     // free the pixel data
$image->destroy();                   // free the object
The server has 512MB of RAM, with usually at least 360MB+ free.
PHP has its memory limit currently set at 96MB, but I have set it higher before without any effect on the issue.
By my estimates, a 21-megapixel image should occupy in the region of 80MB+ when uncompressed, so I am puzzled as to why the RAM is disappearing so rapidly unless the ImageMagick objects are not being removed from memory.
Is there some way I can optimise my script to use less memory or garbage collect more efficiently?
Do I simply not have the RAM to cope with such large images?
Cheers
See this answer for a more detailed explanation.
imagick uses a shared library and its memory usage is out of reach for PHP, so tuning PHP's memory settings and garbage collection won't help.
Try adding this prior to creating the new Imagick() object:
// pixel cache max size
IMagick::setResourceLimit(imagick::RESOURCETYPE_MEMORY, 32);
// maximum amount of memory map to allocate for the pixel cache
IMagick::setResourceLimit(imagick::RESOURCETYPE_MAP, 32);
It will cause imagick to swap to disk (by default in /tmp) when it needs more than 32 MB for juggling images. It will be slower, but it will not run out of RAM (unless /tmp is on a ramdisk, in which case you need to change where imagick writes its temp files).
MattBianco is nearly correct; the only change is that the memory limits are in bytes, so it would be 33554432 for 32MB:
// pixel cache max size
IMagick::setResourceLimit(imagick::RESOURCETYPE_MEMORY, 33554432);
// maximum amount of memory map to allocate for the pixel cache
IMagick::setResourceLimit(imagick::RESOURCETYPE_MAP, 33554432);
Call $image->setSize() before $image->readImage() to have libjpeg downscale the image while loading, which reduces memory usage.
(Edit) Example usage: Efficient JPEG Image Resizing in PHP