I have an embedded application where an image scanner sends out a stream of 16-bit pixels that are later assembled to a grayscale image. As I need to both save this data locally and forward it to a network interface, I'd like to compress the data stream to reduce the required storage space and network bandwidth.
Is there a simple algorithm that I can use to losslessly compress the pixel data?
I first thought of computing the difference between two consecutive pixels and then encoding this difference with a Huffman code. Unfortunately, the pixels are unsigned 16-bit quantities, so the difference can be anywhere in the range -65535 to +65535, which leads to potentially huge codeword lengths. If a few really long codewords occur in a row, I'll run into buffer overflow problems.
Update: my platform is an FPGA
PNG provides free, open-source, lossless image compression in a standard format using standard tools. PNG uses zlib as part of its compression, and libpng is the reference implementation. Unless your platform is very unusual, it should not be hard to port this code to it.
How many resources do you have available on your embedded platform?
Could you port zlib and do gzip compression? Even with limited resources, you should be able to port something like LZ77 or LZ78.
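As a rough sanity check of how much a general-purpose compressor gains on smooth scanner data, here is a sketch using Python's zlib binding (the embedded port would call the zlib C API directly; the gradient data below is synthetic):

```python
import struct
import zlib

# Synthetic scan line: a smooth 16-bit ramp, mimicking adjacent
# pixels that differ only slightly, as real scanner output tends to.
pixels = [1000 + (i % 32) for i in range(4096)]
raw = struct.pack("<%dH" % len(pixels), *pixels)  # little-endian 16-bit

compressed = zlib.compress(raw, 9)
print(len(raw), len(compressed))  # smooth data compresses well
```

The same call sequence (deflateInit/deflate/deflateEnd) applies on the C side; the win comes from the repeated small-step structure of the data, not anything Python-specific.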
There are a wide variety of image compression libraries available. For example, this page lists nothing but libraries/toolkits for PNG images. Which format/library works best for you will most likely depend on the particular resource constraints you are working under (in particular, whether or not your embedded system can do floating-point arithmetic).
The goal with lossless compression is to be able to predict the next pixel based on previous pixels, and then to encode the difference between your prediction and the real value of the pixel. This is what you initially thought of doing, but you were using only the one previous pixel, with the prediction that the next pixel would be the same.
Keep in mind that if you have all of the previous pixels, you have more relevant information than just the preceding pixel. That is, if you are trying to predict the value of X, you should use the O pixels:
..OOO...
..OX
Also, you would not want to use the previous pixel, B, in the stream to predict X in the following situation:
OO...B <-- End of row
X <- Start of next row
Instead you would make your prediction based on the Os.
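A sketch of that idea in Python - the predictor here (average of the left and above neighbours) is a simple stand-in I chose for illustration; PNG's filters and JPEG-LS use more refined variants:

```python
def predict(img, x, y):
    """Predict pixel (x, y) from already-decoded neighbours.

    Uses the pixel to the left and the pixel above; at image
    borders, falls back to whichever neighbour exists.
    """
    left = img[y][x - 1] if x > 0 else None
    above = img[y - 1][x] if y > 0 else None
    if left is None and above is None:
        return 0
    if left is None:
        return above
    if above is None:
        return left
    return (left + above) // 2

def residuals(img):
    """Encode each pixel as (actual - prediction). On smooth images
    most residuals cluster near zero, which a Huffman/Rice coder
    then turns into short codewords."""
    return [[img[y][x] - predict(img, x, y)
             for x in range(len(img[0]))]
            for y in range(len(img))]
```

Note that, unlike a raw previous-pixel delta, this predictor never reaches across a row boundary: the first pixel of a row is predicted from the pixel above it.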
How 'lossless' do you need?
If this is a real scanner, there is a limit to the bandwidth/resolution, so even if it can send +/-64K values it may be physically impossible for adjacent pixels to differ by more than, say, 8 bits' worth.
In that case you can store a start pixel value for each row and then encode the differences between consecutive pixels.
This will smear out peaks, but it may be that any peaks of more than 'N' bits are noise anyway.
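A sketch of that row scheme (Python; the clamp width is the 'N' above, and 8 bits is just an assumed figure - pick it from the scanner's real noise floor). The encoder tracks the reconstructed value rather than the original, so the decoder never drifts out of sync when a peak gets smeared:

```python
def encode_row(row, nbits=8):
    """First pixel verbatim, then signed deltas clamped to nbits.
    Deltas beyond the clamp are smeared out over several pixels,
    on the assumption that larger jumps are noise anyway."""
    lo, hi = -(1 << (nbits - 1)), (1 << (nbits - 1)) - 1
    out = [row[0]]
    prev = row[0]
    for p in row[1:]:
        d = max(lo, min(hi, p - prev))
        out.append(d)
        prev += d  # track the *reconstructed* value, not the original
    return out

def decode_row(enc):
    row = [enc[0]]
    for d in enc[1:]:
        row.append(row[-1] + d)
    return row
```

Within the clamp range the scheme is lossless; only jumps wider than N bits lose information.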
A good LZ77/RLE hybrid with bells and whistles can get wonderful compression that is fairly quick to decompress. They will also be bigger, badder compressors on smaller files due to the lack of library overhead. For a good, but GPL'd, implementation of this, check out PUCrunch.
I had a discussion with a colleague recently on the topic of high-bit encodings vs. low bit encodings of 12 bit images in 16 bits of memory (e.g. PNG files). He argued that high-bit encodings were easier to use, since many image viewers (e.g. windows image preview, explorer thumbnails, ...) can display them more easily in a human-readable way using trivial 16-to-8 conversions, while low-bit encoded images appear mostly black.
I'm thinking more from an image processing perspective and thought that surely, low-bit encodings are better, so I sat down, did some experimentation and wrote out my findings. I'm curious if the community here has additional or better insights that I am missing.
1. When using some image processing backend (e.g. Intel IPP), 10- and 12-bit encodings are often assumed to physically occupy 16 bits (unsigned short). The memory is read as "a number", so a gray value of 1 (12 bit) encoded in the low bits yields a "1", while encoded in the high bits it yields 16 (effectively, a left shift by four).
2. Taking a low-bit image and the corresponding left-shifted high-bit image and performing some operations (possibly including interpolation) will yield identical results after right-shifting the result of the high-bit input (for comparison's sake).
3. The main differences come when looking at the histograms of the images: the low-bit histogram is "dense" and 4k entries long, while the high-bit histogram contains 15 zeros for every non-zero entry and is 65k entries long.
3 a) Generating lookup tables (LUTs) for operations takes 16x as long; applying them should take identical time.
3 b) Operations scaling with histogram^2 (e.g. gray-level co-occurrence matrices) become much more costly: 256x the memory and time if done naively (but then again, in order to correctly treat any interpolated pixels with values not allowed in the original 12-bit encoding, you kind of have to...).
3 c) Debugging histogram vectors is a PITA when looking at mostly-zero interspersed high-bit histograms.
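The shift relationship and the histogram sizes described above are easy to check with numpy (synthetic values, not real sensor data):

```python
import numpy as np

rng = np.random.default_rng(0)
low = rng.integers(0, 4096, size=(64, 64), dtype=np.uint16)  # 12-bit values
high = low << 4  # the high-bit encoding: left shift by four

# A gray value of 1 in the low bits reads as 16 in the high bits.
assert (high == low * 16).all()

# Histograms: 4k dense bins vs 65k mostly-zero bins.
h_low = np.bincount(low.ravel(), minlength=4096)
h_high = np.bincount(high.ravel(), minlength=65536)
print(len(h_low), len(h_high))
```

Every occupied bin in `h_high` sits at an index divisible by 16, which is exactly the "15 zeros for every non-zero entry" pattern.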
To my great surprise, that was all I could come up with. Anything obvious that I am missing?
I am doing segmentation via deep learning in pytorch. My dataset consists of ultrasound images in .raw/.mhd format.
I want to input my dataset into the system via data loader.
I face a few important questions:
Does changing the format of the dataset to either .png or .jpg make the segmentation inaccurate? (I think I would lose some information this way!)
Which format is less lossy?
How should I make a numpy array if I don't convert the original image format, i.e., .raw/.mhd?
How should I load this dataset?
Knowing nothing about raw and mhd formats, I can give partial answers.
Firstly, jpg is lossy and png is not, so you're surely losing information with jpg. png is lossless for "normal" images - 1, 3 or 4 channels, with 8-bit precision in each (perhaps 16 bits are also supported, don't quote me on that). I know nothing about ultrasound images, but if they use higher precision than that, even png will be lossy.
Secondly, I don't know what mhd is and what raw means in the context of ultrasound images. That being said, a simple google search reveals some package for reading the former to numpy.
Finally, to load the dataset, you can use the ImageFolder class from torchvision. You need to write a custom function which loads an image given its path (for instance using the package mentioned above) and pass it to the loader keyword argument.
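For the uncompressed case, a MetaImage header is just `key = value` text, so a minimal loader needs only numpy. This is a sketch for the simple case (the `load_mhd` name and the supported type list are my own choices; SimpleITK's `ReadImage`/`GetArrayFromImage` handle compressed and more exotic variants):

```python
import os
import numpy as np

# Map MetaImage element types to numpy dtypes; extend as needed.
DTYPES = {"MET_UCHAR": np.uint8, "MET_SHORT": np.int16,
          "MET_USHORT": np.uint16, "MET_FLOAT": np.float32}

def load_mhd(mhd_path):
    """Read an *uncompressed* MetaImage (.mhd + .raw) pair into numpy."""
    header = {}
    with open(mhd_path) as f:
        for line in f:
            key, _, value = line.partition("=")
            header[key.strip()] = value.strip()
    dims = [int(d) for d in header["DimSize"].split()]
    dtype = DTYPES[header["ElementType"]]
    raw_path = os.path.join(os.path.dirname(mhd_path),
                            header["ElementDataFile"])
    # MetaImage lists DimSize as x y [z]; numpy indexes (z, y, x).
    return np.fromfile(raw_path, dtype=dtype).reshape(dims[::-1])
```

A torch `Dataset` would then call `load_mhd` in `__getitem__` and wrap the result with `torch.from_numpy`; torchvision's `DatasetFolder` also accepts such a function via its `loader` argument.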
Say I have a sequence of .dicom files in a folder. The cumulative size is about 100 MB - a lot of data. I tried to convert the data into .nrrd and .nii, but those files had the same total size as the converted .dicom files (which is fairly predictable, though .nrrd was compressed with gzip). I'd like to know if there is a file format that would give me far smaller sizes, or another way to solve this. Perhaps .vtk, or something else (not sure it would work). Thanks in advance.
DICOM supports compression of the pixel data within the file itself. The idea of DICOM is that it's format agnostic from the point of view of the pixel data it holds.
DICOM can hold raw pixel data and also can hold JPEG-compressed pixel data, as well as many other formats. The transfer syntax tag of the DICOM file gives you the compression protocol of the pixel data within the DICOM.
The first thing is to figure out whether you need lossless or lossy compression. If lossy, there are a lot of options, and the compression ratio is quite high in some - the tradeoff is that you do lose fidelity and the images may not be adequate for diagnostic purposes. There are also lossless compression schemes - like JPEG2000, RLE and even JPEG-LS. These will compress the pixel data, but retain diagnostic quality without any image degradation.
You can also zip the files, which, if raw, should produce very good results. What are you looking to do w/ these compressed DICOMs?
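As a rough illustration of the zip suggestion, the Python standard library is enough (the folder layout and function name here are hypothetical):

```python
import os
import zipfile

def zip_dicom_folder(folder, out_path):
    """Pack every .dcm file in `folder` into one DEFLATE-compressed zip.

    Raw (uncompressed) pixel data typically shrinks well; DICOMs that
    already hold JPEG/JPEG2000 pixel data will barely shrink at all.
    """
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(folder)):
            if name.lower().endswith(".dcm"):
                zf.write(os.path.join(folder, name), arcname=name)
```

The zip approach is external to DICOM, though - re-encoding the pixel data with a lossless transfer syntax keeps the files individually readable by DICOM tools.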
I am using the Jtransforms library which seems to be wicked fast for my purpose.
At this point I think I have a pretty good handle on how the FFT works, so now I am wondering if there is any standard domain used for audio visualizations like spectrograms?
Thanks to android's native FFT in 2.3 I had been using bytes as the range although I am still unclear as to whether it is signed or not. (I know java doesn't have unsigned bytes, but Google implemented these functions natively and the waveform is PCM 8bit unsigned)
However I am adapting my app to work with mic audio and 2.1 phones. At this point having the input domain being in the range of bytes whether it is [-128, 127] or [0, 255] no longer seems quite optimal.
I would like the range of my FFT function to be [0,1] so that I can scale it easily.
So should I use a domain of [-1, 1] or [0, 1]?
Essentially, the input domain does not matter. At most, it causes an offset and a change in scaling on your original data, which will be turned into an offset on bin #0 and an overall change in scaling on your frequency-domain results, respectively.
As to limiting your FFT output to [0,1]: that's essentially impossible. In general, the FFT output is complex; there's no way to manipulate your input data so that the output is restricted to positive real numbers.
If you use DCT instead of FFT your output range will be real. (Read about the difference and decide if DCT is suitable for your application.)
FFT implementations for real-valued input need only half as many output samples, because the output is conjugate-symmetric when the input is real. So the fact that each output sample has both a real and an imaginary part doesn't change the size of the result (vs. the size of the source) much: for n real inputs you get floor(n/2)+1 complex bins, roughly n real numbers in total.
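That symmetry is easy to verify with numpy; this sketch also shows the usual way to get a [0, 1] display range - normalize the magnitude spectrum after the fact:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)   # real input, n = 16

full = np.fft.fft(x)          # n complex bins
half = np.fft.rfft(x)         # n//2 + 1 complex bins

# Conjugate symmetry: bin k mirrors bin n-k, so the upper half is redundant
# and rfft matches the lower half of the full transform.
assert np.allclose(full[1:8], np.conj(full[-1:-8:-1]))
assert np.allclose(full[:9], half)

# A [0, 1] range for display: scale the magnitude spectrum by its peak.
mag = np.abs(half)
scaled = mag / mag.max()
```

The scaling is per-frame here; a spectrogram usually normalizes against a fixed reference (or uses dB) so that frames are comparable.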
Note - may be more related to computer organization than software, not sure.
I'm trying to understand something related to data compression, say for jpeg photos. Essentially a very dense matrix is converted (via discrete cosine transforms) into a much more sparse matrix. Supposedly it is this sparse matrix that is stored. Take a look at this link:
http://en.wikipedia.org/wiki/JPEG
Compare the original 8x8 sub-block image example to matrix "B", which after the transform has overall lower-magnitude values and many more zeros throughout. How is matrix B stored such that it saves much more memory than the original matrix?
The original matrix clearly needs 8x8 (number of entries) x 8 bits/entry since values can range randomly from 0 to 255. OK, so I think it's pretty clear we need 64 bytes of memory for this. Matrix B on the other hand, hmmm. Best case scenario I can think of is that values range from -26 to +5, so at most an entry (like -26) needs 6 bits (5 bits to form 26, 1 bit for sign I guess). So then you could store 8x8x6 bits = 48 bytes.
The other possibility I see is that the matrix is stored in a "zig zag" order from the top left. Then we can specify a start and an end address and just keep storing along the diagonals until we're only left with zeros. Let's say it's a 32-bit machine; then 2 addresses (start + end) will constitute 8 bytes; for the other non-zero entries at 6 bits each, say, we have to go along almost all the top diagonals to store a sum of 28 elements. In total this scheme would take 29 bytes.
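A quick check of the byte arithmetic above:

```python
# Back-of-the-envelope check of the three storage estimates.
original = 8 * 8 * 8 // 8             # 64 bytes: 8x8 entries, 8 bits each
fixed6 = 8 * 8 * 6 // 8               # 48 bytes: 6 bits per entry
zigzag = 2 * 4 + (28 * 6 + 7) // 8    # two 32-bit addresses + 28 six-bit entries
print(original, fixed6, zigzag)       # 64 48 29
```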
To summarize my question: if JPEG and other image encoders are claiming to save space by using algorithms to make the image matrix less dense, how is this extra space being realized in my hard disk?
Cheers
The DCT needs to be accompanied by other compression schemes that take advantage of the runs of zeros in the high-frequency coefficients. A simple example is run-length encoding.
JPEG uses a variant of Huffman coding.
As it says under "Entropy coding", a zig-zag pattern is used together with RLE, which already reduces the size in many cases. As far as I know, the DCT itself doesn't give a sparse matrix per se, but it concentrates the signal energy into a few coefficients, which makes the matrix much more compressible. This is where the compression becomes lossy: the input matrix is transformed with the DCT, then the values are quantized, and then Huffman encoding is applied.
The simplest compression would take advantage of repeated sequences of symbols (zeros). A matrix in memory may look like this (decimal digits, for illustration):
00000000000001000000000002100000000000043010003000000000004
After compression it may look like this
(0,13)1(0,11)21(0,12)43010003(0,11)4
(Symbol,Count)...
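A sketch of that zero-run scheme in Python. The `(0,count)` token is only worth emitting when it is shorter than the run it replaces, so `min_run` is an assumed threshold rather than part of any standard:

```python
def rle_zeros(s, min_run=6):
    """Replace runs of '0' with '(0,count)' when that is shorter.

    With min_run=6 the token '(0,n)' (at least 5 characters) is only
    emitted when it actually saves space; shorter runs stay literal.
    """
    out, i = [], 0
    while i < len(s):
        if s[i] == "0":
            j = i
            while j < len(s) and s[j] == "0":
                j += 1
            run = j - i
            out.append("(0,%d)" % run if run >= min_run else "0" * run)
            i = j
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```

JPEG's actual entropy coding is more involved (zero-run/value pairs fed into Huffman tables), but the space saving comes from the same observation: long zero runs are cheap to describe.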
As I understand it, JPEG doesn't only compress - it also drops data. After the 8x8 block is transformed to the frequency domain, the insignificant (high-frequency) data is dropped, which means only the significant 6x6 or even 4x4 coefficients have to be saved. That is how it achieves a higher compression ratio than lossless methods (like GIF).