I want to parse a PNG file into chunks.
I know that if the file is compressed, the IDAT chunk will be affected; will compression affect the other chunks (the ancillary chunks) as well?
I read about the compression here, and it seems that the other chunks will not be affected.
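For what it's worth, the chunk framing itself (length / type / data / CRC) is never compressed; only the contents of IDAT (and of a few ancillary chunks that are defined as compressed, such as zTXt and iCCP) are. A minimal sketch in Python of walking the chunks, with the file name made up for illustration:

import struct, zlib

with open("image.png", "rb") as f:        # hypothetical file name
    data = f.read()

assert data[:8] == b"\x89PNG\r\n\x1a\n"   # fixed 8-byte PNG signature

pos = 8
while pos < len(data):
    # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
    length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
    chunk = data[pos + 8:pos + 8 + length]
    crc, = struct.unpack(">I", data[pos + 8 + length:pos + 12 + length])
    assert crc == zlib.crc32(ctype + chunk)    # CRC covers type + data, not length
    print(ctype.decode("ascii"), length)
    pos += 12 + length
    if ctype == b"IEND":
        break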
So I want to extract data from a PNG file in which these parameters are always the same:
Bit depth: 8
Color type: 6
Compression method: 0
Filter method: 0
Interlace method: 0
What I want is an array of all the pixels as RGBA. I already have the IDAT chunk extracted, but I don't know what to do next.
According to the libpng documentation, I have to reverse the process of creating the image data.
As I understand it, I have to decompress the chunk contents and then reverse the filtering process, which should give me the truecolor pixel representation, but I don't know how to do either step.
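A minimal sketch of both steps in Python, assuming a hypothetical helper decode_idat where idat holds the concatenated data of all IDAT chunks (together they form a single zlib stream) and width/height come from the IHDR chunk; with bit depth 8 and color type 6 each pixel is 4 bytes (RGBA), and with interlace method 0 each scanline is simply prefixed by one filter-type byte:

import zlib

def decode_idat(idat, width, height):
    """Decompress IDAT data and undo per-scanline filtering (RGBA8, no interlace)."""
    raw = zlib.decompress(idat)                    # IDAT payload is a zlib stream
    bpp = 4                                        # bytes per pixel (color type 6, bit depth 8)
    stride = width * bpp
    prev = bytearray(stride)                       # previous scanline, all zeros for the first row
    pixels = bytearray()

    for y in range(height):
        offset = y * (stride + 1)
        ftype = raw[offset]                        # filter-type byte that precedes each scanline
        line = bytearray(raw[offset + 1:offset + 1 + stride])

        for x in range(stride):
            a = line[x - bpp] if x >= bpp else 0   # left neighbour (already reconstructed)
            b = prev[x]                            # pixel above
            c = prev[x - bpp] if x >= bpp else 0   # upper-left
            if ftype == 0:                         # None
                add = 0
            elif ftype == 1:                       # Sub
                add = a
            elif ftype == 2:                       # Up
                add = b
            elif ftype == 3:                       # Average
                add = (a + b) // 2
            else:                                  # Paeth
                p = a + b - c
                pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
                add = a if pa <= pb and pa <= pc else (b if pb <= pc else c)
            line[x] = (line[x] + add) & 0xFF

        prev = line
        pixels += line                             # flat RGBA bytes: r, g, b, a, r, g, b, a, ...
    return pixels

After that, pixels[4 * (y * width + x) : 4 * (y * width + x) + 4] gives the R, G, B, A values of the pixel at (x, y).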
To expand on my question: I have an NSString that contains a URL to a video file that weighs 2 MB. I use dataWithContentsOfURL to store it as NSData. I added a breakpoint and checked the size of the data; it was far too high (more than 12 MB). I just took the byte count and did the math on it (bytes / 1024 / 1024).
Then I save the data as a video file with UISaveVideoAtPathToSavedPhotosAlbum, and I used attributesOfItemAtPath to get the FileSize attribute of that saved file; it shows up correctly (2 MB). All I want to know is how the objects are encoded, and why there is this drastic change in size (in bytes) when reading the URL into a data object.
An NSData is not "encoded" in any way; the bytes stored in one are the raw bytes. There is some overhead in the object itself, but it is small.
Reading in a 2 MB file will take a little more than 2 MB of memory; the overhead is small enough that you might not notice it in the memory usage. You can read much larger files than this and see no significant memory growth beyond the file size.
Your increase in memory usage is due to something else. Accidental multiple copies of the NSData? Maybe you are unintentionally decompressing compressed video? You'll have to hunt for the cause.
Considering Google can't even do it, I'm assuming the answer is "No"?
I just went through the basic suggestions from audreyr's "Favicon Cheat Sheet" and created a favicon.ico file consisting of two optimized png files using ImageMagick like so:
$ convert favicon-16.png favicon-32.png favicon.ico
My favicon-16.png file was 137 bytes after optimizing with optipng and my favicon-32.png file was 144 bytes after optimization.
So you can understand my surprise when the combined favicon.ico file created by ImageMagick ended up being 5,430 bytes. Coincidentally, that's the exact same size as Google's official favicon.ico file.
Is 5,430 bytes the absolute minimum size for any true image/x-icon file?
That seems a little excessive when realistically every single browser accessing my favicon.ico file will be extracting the 144-byte 32x32 PNG version.
If the source favicon-16.png and favicon-32.png are truecolor (RGB888 or RGBA8888), ImageMagick will write an uncompressed 5,430-byte ICO file. However, if they are indexed-color (i.e., in PNG8 format) or grayscale, the ICO may be smaller (I observe 3,638-byte ICO files in those cases).
The images are stored within the ICO in uncompressed BMP format, not PNG (only 256x256 images get stored as PNG inside an ICO). That is where the 5,430 bytes come from: a 6-byte ICONDIR header, two 16-byte directory entries, and then for each image a 40-byte BITMAPINFOHEADER, the raw 32-bit pixel data, and a 1-bit AND mask, which works out to 1,128 bytes for the 16x16 image and 4,264 bytes for the 32x32 image.
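If you want to see the breakdown for yourself, the ICO directory is easy to read by hand; a small illustrative Python sketch (the file name is whatever ImageMagick produced):

import struct

with open("favicon.ico", "rb") as f:
    data = f.read()

# ICONDIR header: reserved (0), resource type (1 = icon), image count.
reserved, res_type, count = struct.unpack_from("<HHH", data, 0)
print(f"{count} image(s), {len(data)} bytes total")

for i in range(count):
    # Each ICONDIRENTRY is 16 bytes; width/height of 0 mean 256.
    w, h, colors, _, planes, bits, size, offset = struct.unpack_from("<BBBBHHII", data, 6 + 16 * i)
    kind = "PNG" if data[offset:offset + 8] == b"\x89PNG\r\n\x1a\n" else "BMP"
    print(f"  {w or 256}x{h or 256} {bits}-bit {kind}, {size} bytes at offset {offset}")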
Let's suppose I have a chunk of deflated data (that is, the compressed data right after the first 2 bytes and before the ADLER32 check in a zlib-compressed file).
How can I know the length of that chunk? How can I find where it ends?
You would need to get that information from some metadata external to the zlib block. (The same is true of the uncompressed size.) Or you could decompress and see where you end up.
Compressed blocks in deflate format are self-delimiting, so the decoder will terminate at the correct point (unless the datastream has been corrupted).
Most file storage and data transmission formats which include compressed data make this metadata available. But since it is not necessary for decompression, it is not stored in the compressed stream.
The only way to find where it ends is to decompress it; deflate streams are self-terminating.
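In Python, for example, the "decompress and see where you end up" approach looks like this (a sketch; blob is assumed to start with the raw deflate data, i.e. the 2-byte zlib header already stripped, possibly followed by the ADLER32 check and anything else):

import zlib

# Negative window bits tell zlib to expect a raw deflate stream
# (no zlib header or ADLER32 trailer).
d = zlib.decompressobj(-zlib.MAX_WBITS)
output = d.decompress(blob)

# The decoder stops at the end-of-stream marker of the final deflate block;
# whatever follows is left untouched in unused_data.
deflate_length = len(blob) - len(d.unused_data)
print(d.eof, deflate_length, len(output))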
I am trying to store image data from a file into a PostgreSQL database as a base64 string that is compressed by gzip to save space. I am using the following code to encode the image:
@file = File.open("#{Rails.root.to_s}/public/" << @ad_object.image_url).read
@base64 = Base64.encode64(@file)
@compressed = ActiveSupport::Gzip.compress(@base64)
@compressed.force_encoding('UTF-8')
@ad_object.imageData = @compressed
When I try to save the object, I get the following error:
ActiveRecord::StatementInvalid (PG::Error: ERROR: invalid byte sequence for encoding "UTF8": 0x8b
In the Rails console, any gzip compression outputs the data with ASCII-8BIT encoding. I have tried setting my internal and external encodings to UTF-8, but the results have not changed. How can I get this compressed data into a UTF-8 string?
This doesn't make much sense for a number of reasons.
gzip is a binary encoding. There's absolutely no point base64-encoding something then gzipping it, since the output is binary and base64 is only for transmitting over non-8bit-clean protocols. Just gzip the file directly.
Most image data is already compressed with a codec like PNG or JPEG that is much more efficient at compressing image data than gzip is; gzipping it will usually make the image slightly bigger. Gzip will never be as efficient for image data as the lossless PNG format, so if your image data is uncompressed, compress it as PNG instead of gzipping it.
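A quick way to convince yourself of both points (an illustrative Python check; any already-compressed image file will do, the name here is made up):

import base64, gzip

png = open("image.png", "rb").read()                     # an already-compressed image

print(len(png))                                          # original size
print(len(gzip.compress(png)))                           # usually about the same, often slightly larger
print(len(gzip.compress(base64.b64encode(png))))         # base64 first only adds overhead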
When representing binary data there isn't really a text-encoding concern, because it isn't text. It won't be valid UTF-8 (the 0x8b in your error is the second byte of the gzip magic number, 1f 8b), and trying to tell the system it is will just cause further problems.
Do away entirely with the base64 encoding and gzip steps. As "mu is too short" says, just use the Rails binary field and let Rails handle the encoding and sending of the binary data.
Just use bytea fields in the database and store the PNG or JPEG images directly. These are hex-encoded on the wire for transmission, which takes 2x the space of the binary, but they're stored on disk in binary form. PostgreSQL automatically compresses bytea fields on disk if they benefit from compression, but most image data won't.
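In Rails a binary column does all of this for you; purely to illustrate what ends up in the bytea column, here is a hedged sketch at the driver level using psycopg2 (the table, column, file and connection details are made up):

import psycopg2

conn = psycopg2.connect("dbname=mydb")                 # hypothetical connection string
image = open("ad.png", "rb").read()                    # raw PNG/JPEG bytes, no base64, no gzip

with conn, conn.cursor() as cur:
    cur.execute("INSERT INTO ad_objects (image_data) VALUES (%s)",
                (psycopg2.Binary(image),))             # sent and stored as bytea

conn.close()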
To minimize the size of the image, choose an appropriate compression format like PNG for lossless compression or JPEG for photographs. Downsample the image as much as you can before compression, and use the strongest compression that produces acceptable quality (for lossy codecs like JPEG). Do not attempt to further compress the image with gzip/LZMA/etc.; it will achieve nothing.
You'll still have the data doubling in size when transmitted as hex escapes over the wire. Solving that requires either using the PostgreSQL binary protocol (difficult and complicated) or a binary-clean side-band to transmit the image data. If the Pg gem supports SSL compression, you can use it to compress the protocol traffic, which will reduce the cost of the hex escaping considerably.
If keeping the size down to the absolute minimum is necessary, I would not use the PostgreSQL wire protocol to send the images to the device. It's designed for performance and reliability more than for absolute minimum size.