Is it possible to strip the chroma information from a JPEG file without any loss on the luma?
Ideally I'd like a smaller, greyscale version of an existing, already-optimized image.
Assuming the scans are not interleaved, you could update the SOF marker to declare a single component and then delete the 2nd and 3rd scans from the input stream.
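To make the SOF surgery concrete, here is a minimal sketch of how a baseline SOF0 segment encodes the component count and how rewriting it to a single (luma) component would look. This works on raw marker bytes only; the `sof` value below is a hypothetical segment body, not taken from a real file, and a real tool would also have to drop the chroma scans and fix up the SOS headers.

```python
import struct

def parse_sof0(segment):
    """Parse a baseline SOF0 segment body (the bytes after the 0xFFC0 marker)."""
    length, precision, height, width, n = struct.unpack(">HBHHB", segment[:8])
    # each component is 3 bytes: id, sampling factors, quant table index
    components = [tuple(segment[8 + 3 * i:11 + 3 * i]) for i in range(n)]
    return height, width, components

def keep_luma_only(segment):
    """Rewrite the SOF0 body to declare a single component (the first, i.e. Y)."""
    precision = segment[2]
    height, width = struct.unpack(">HH", segment[3:7])
    first = segment[8:11]
    # new length = 2 (length field) + 6 (frame header) + 3 (one component)
    return struct.pack(">HBHHB", 11, precision, height, width, 1) + first

# A hypothetical SOF0 body for a 640x480, 8-bit, 3-component (YCbCr) image.
sof = (struct.pack(">HBHHB", 17, 8, 480, 640, 3)
       + bytes([1, 0x22, 0]) + bytes([2, 0x11, 1]) + bytes([3, 0x11, 1]))
gray_sof = keep_luma_only(sof)
```

In practice, `jpegtran -grayscale` performs exactly this kind of lossless luma-only rewrite for you, including the scan deletion and header bookkeeping.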
Here is the link (https://imgplay.zendesk.com/hc/en-us/articles/360029411991-What-is-GIF-Dithering-Option-) where it says: "When you save the file as GIF with dithering, it can make your GIF more natural."
How can I implement dithering to create more natural GIFs from UIImages or video frames, using Objective-C or Swift?
Assuming your source image is 8-bit per channel RGB, you could use vImage. vImage doesn't have a vImageConvert_RGB88toIndexed8, but you can split your interleaved image into three 8-bit planar buffers for RGB. I don't know exactly how well this would work, but you could convert two of the three channels to Indexed2 with vImageConvert_Planar8toIndexed2 and the other channel to Indexed4 with vImageConvert_Planar8toIndexed4. That would give you the required 8-bit lookup table.
Apple have loads of Accelerate sample code projects here. Optimising Image-Processing Performance discusses converting interleaved images to planar format. If you have a known palette, Applying Color Transforms to Images with a Multidimensional Lookup Table may be a solution to quantising your image to 256 colors.
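Independently of the vImage API details, the core of GIF-style dithering is error diffusion during palette reduction. A minimal Floyd-Steinberg sketch follows; it is written in Python for brevity (the same logic ports directly to Swift), operates on a plain 2-D grayscale array, and is an illustration of the algorithm rather than of the Accelerate pipeline.

```python
def floyd_steinberg(pixels, palette):
    """Dither a 2-D grayscale image (lists of floats in 0-255) to the
    nearest palette levels, diffusing the quantisation error onto
    neighbouring, not-yet-visited pixels."""
    h, w = len(pixels), len(pixels[0])
    img = [row[:] for row in pixels]            # work on a copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = min(palette, key=lambda p: abs(p - old))
            out[y][x] = new
            err = old - new
            # classic Floyd-Steinberg error-diffusion weights
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out

# A flat mid-grey patch dithered to a pure black/white palette becomes a
# mix of 0s and 255s whose average stays close to the original 128.
result = floyd_steinberg([[128.0] * 8 for _ in range(8)], [0, 255])
```

This "average is preserved even though each pixel snaps to the palette" effect is what makes a dithered GIF look more natural than straight nearest-colour quantisation.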
I am relatively new to computer vision and image processing. I have a single EXR file with 7 channels: channels 1-3 give me the RGB values, channels 4-6 give me the surface normals encoded as RGB values, and the 7th channel contains depth information from the camera of the rendered image. I was wondering if there is any way to view these 7 channels separately. For example, I would like an image showing only the depth values in grayscale. So far I haven't found a multi-channel EXR file viewer that does this for me.
Thanks for the help!
I'm looking for a way to convert raster images to vector data using OpenCV. There I found the function cv::findContours(), which seems a bit primitive (or, more probably, I did not fully understand it):
It seems to work on b/w images only (no greyscale and no coloured images) and does not seem to accept any filtering/error-suppression parameters that could be helpful in noisy images, e.g. to avoid very short vector lines or uneven polylines where one single, straight line would be the better result.
So my question: is there an OpenCV way to vectorise coloured raster images where the colour information is assigned to the resulting polylines afterwards? And how can I apply noise reduction and error suppression to such an algorithm?
Thanks!
If you want to vectorise the image by colour, I recommend clustering the image into a few groups of colours (or quantising it), then extracting the contours of each colour and converting them to the needed format. There are no ready-made vectorising methods in OpenCV.
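The quantise-then-extract idea can be sketched as follows. To keep the sketch self-contained, a crude fixed-grid quantiser stands in for a proper clustering step such as cv2.kmeans; the function names and the toy image are made up for illustration. Each resulting binary mask would then be fed to cv2.findContours (with cv2.approxPolyDP for line simplification / noise suppression) to vectorise that colour's regions.

```python
def quantise(pixel, step=64):
    """Snap each RGB channel to a coarse grid -- a crude stand-in for a
    proper clustering such as cv2.kmeans."""
    return tuple((c // step) * step + step // 2 for c in pixel)

def masks_by_colour(image):
    """Build one binary mask per quantised colour; each mask could then be
    passed to cv2.findContours to vectorise that colour's regions."""
    h, w = len(image), len(image[0])
    masks = {}
    for y in range(h):
        for x in range(w):
            key = quantise(image[y][x])
            masks.setdefault(key, [[0] * w for _ in range(h)])[y][x] = 255
    return masks

# Toy 2x2 image: a reddish top row and a bluish bottom row.
image = [[(250, 10, 10), (250, 12, 8)],
         [(10, 10, 250), (12, 8, 250)]]
masks = masks_by_colour(image)
```

Because similar colours collapse to the same key, pixel-level noise in a nominally uniform region disappears before contour extraction, which already removes many of the spurious short segments.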
I'm working with lots of cameras which natively capture in a BG Bayer pattern.
Now, every time I record some data, I save it to disk in the raw Bayer pattern, in an AVI container. The problem is that this really adds up after a while. After one year of research, I have close to 4 TB of data...
So I'm looking for a lossless codec to compress this data. I know I could use libx264 (with --qp 0), HuffYUV, Dirac or JPEG 2000, but they all assume you have RGB or YUV data. It's easy enough to convert the bayered data to RGB and then compress it, but it kind of defeats the purpose of compression if you first triple the data. It would also mean that the demosaicing artefacts introduced by debayering end up in my source data, which is not great either. It would be nice to have a codec that can work on the bayered data directly.
Even nicer would be a solution involving a codec that is already supported by GStreamer (or FFmpeg), since that's what I'm already using.
A rather late suggestion, but maybe useful for others:
It helps to deinterleave the Bayer pattern into four quadrants and then treat that image as grayscale. The sub-images (e.g. all red pixels in top left) have half the spatial resolution, but their pixels are more highly correlated. This leads to lower residuals from predictors using nearby pixels and therefore to better compression ratios.
I've seen this reach 2-3x lossless compression on 12-bit raw camera data.
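The quadrant deinterleave described above can be sketched as follows, assuming a BGGR layout (blue in the top-left corner); the function name and toy mosaic are made up for illustration.

```python
def split_bayer_bg(mosaic):
    """Deinterleave a BGGR Bayer mosaic (2-D list) into four half-resolution
    planes; each plane is a smooth single-colour image that the predictors
    of a lossless codec handle much better than the interleaved mosaic."""
    b  = [row[0::2] for row in mosaic[0::2]]    # even rows, even cols: blue
    g1 = [row[1::2] for row in mosaic[0::2]]    # even rows, odd cols: green
    g2 = [row[0::2] for row in mosaic[1::2]]    # odd rows, even cols: green
    r  = [row[1::2] for row in mosaic[1::2]]    # odd rows, odd cols: red
    return b, g1, g2, r

# 4x4 toy mosaic where the value encodes the pixel position (10*row + col).
mosaic = [[10 * y + x for x in range(4)] for y in range(4)]
b, g1, g2, r = split_bayer_bg(mosaic)
```

The four planes can then be tiled into a single grayscale frame (e.g. as four quadrants) and fed to a lossless codec such as FFV1 or x264 with --qp 0 through FFmpeg or GStreamer; the shuffle is exactly reversible on decode.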
If a commercial solution is OK, check out Cineform. I've used their SDK for a custom video compressor and it works great; plus, they have some great tools for processing raw video.
Or if you prefer the open source route check out Elphel JP4.
All I know about Bayer patterns I learned from Wikipedia, but isn't conversion to RGB more of a deinterlacing than a tripling? Doesn't the resolution for red and blue go down by a factor of 4, and green by a factor of 2? If so, a lossless image compression scheme like lossless JPEG might be just the thing.
I am creating a mosaic of two images based on the region matches between them, using SIFT descriptors. The problem is that when the created mosaic gets too large, MATLAB runs out of memory.
Is there some way of stitching the images without actually loading the complete images into memory?
If not, how do other gigapixel image generation techniques, or the panorama apps, work?
Determine the size of the final mosaic prior to stitching (easy to compute from the size of your input images and the homography).
Write a blank mosaic to file (not in any specific format, but as a raw sequence of bytes, just as in memory).
I'm assuming you're inverse mapping the pixels from the original images to the mosaic. So, just write to file when you're trying to store the intensity of the pixel in your mosaic.
There are a few ways you can save memory:
You should use integer data types, such as uint8 for your data.
If you're stitching, you can keep only the regions of interest in memory, such as the potential overlap regions.
If none of the above works, you can spatially downsample the images using imresize, and work on the resulting smaller images.
You can potentially use distributed arrays in the Parallel Computing Toolbox.