WebP recommended settings for bulk conversion

WebP seems to have an incredible number of settings you can tweak. That's great if you are converting images by hand, but is there a set of recommended default settings for WebP, or has Google published somewhere what settings they use for YouTube?
See this talk: https://www.youtube.com/watch?v=8vJSCmIMIjI where they mention a 20% savings in file size. We'd love to see a similar decrease in file size from our JPEG images without sacrificing (much) quality, but I don't trust that I can just play with the settings for a little while and eyeball the results to decide whether I've degraded the quality too far...

cwebp -preset photo -q 75 input_file -o output_file.webp
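Rather than eyeballing the output, you can compare each original against its WebP version with an objective metric such as PSNR (or SSIM). Here is a minimal PSNR sketch in Python with NumPy; decoding the two files to pixel arrays is left to whatever image library you already use, and the synthetic arrays below are only illustrative:

```python
import numpy as np

def psnr(original: np.ndarray, converted: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two 8-bit images (higher = closer)."""
    diff = original.astype(np.float64) - converted.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # bit-identical images
    return 10 * np.log10(255.0 ** 2 / mse)

# Synthetic stand-ins for a decoded original and its recompressed version.
img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
noisy = np.clip(img.astype(np.int16) + 1, 0, 255).astype(np.uint8)
```

Values above roughly 40 dB are generally hard to distinguish visually, so a bulk-conversion script can flag any file whose PSNR falls below a threshold you pick instead of relying on eyeballing.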

Related

Are there any lossless camera image compression supported in ROS1 or ROS2?

I want to compress incoming raw images from the robot camera in ROS to store and upload them efficiently. Raw images are huge and take a lot of space, about 1 GB per second at 30 fps. I need lossless compression algorithms that can be used while recording a rosbag.
The standard ROS image_transport package supports the PNG format. I should also note that if you're using this to help with the size of bag files, the standard rosbag CLI tools support two types of bagfile compression, too.
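For the bag-file side, the rosbag CLI exposes compression directly. A sketch, assuming a ROS1 installation (the topic name is illustrative):

```shell
# Compress while recording: LZ4 is fast; bz2 (-j) is smaller but slower.
rosbag record --lz4 /camera/image_raw

# Or compress an existing bag after the fact (bz2 by default).
rosbag compress my_session.bag
```

Both LZ4 and bz2 are lossless, so this composes cleanly with the requirement above; the trade-off is purely CPU time versus on-disk size.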

Alternatives for Error level Analysis (ELA)

I am working on image processing with deep learning and came across Error Level Analysis, which shows differences in compression level (I'm trying to show whether an image has gone through multiple compressions or not) in JPEG (lossy compression).
Are there any other techniques similar to ELA for JPEG, and techniques, similar or different, that can be used on PNG as well to show multiple compressions?
There cannot be, IMHO.
Since PNG compression is lossless, every decompression must result in the identical original image. Therefore, every recompression will start from the same place so no history can be retained.
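The argument can be demonstrated with Python's zlib, the same DEFLATE compression that PNG uses internally (a sketch; raw bytes stand in for pixel data):

```python
import zlib

pixels = bytes(range(256)) * 64  # stand-in for raw image pixel data

# "Save as PNG, then reopen": compress, then decompress.
cycle1 = zlib.decompress(zlib.compress(pixels, 9))
# A second save/reopen cycle starts from the exact same bytes...
cycle2 = zlib.decompress(zlib.compress(cycle1, 9))

# ...so every generation is bit-identical: no history accumulates.
assert pixels == cycle1 == cycle2
```

This is exactly why an ELA-style technique has nothing to detect on PNG: every generation decodes to the same pixels, unlike JPEG, where each recompression leaves new artifacts.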

Using Huffman coding to compress images taken by the iPhone camera

I'm thinking of using Huffman coding to make an app that takes pictures right from the iPhone camera and compresses them. Would it be possible for the hardware to handle the complex computation and building the tree? In other words, is it doable?
Thank you
If you mean the image files (like JPG, PNG, etc.), then you should know that they are already compressed with algorithms specific to images. The resulting files would not Huffman-compress much, if at all.
If you mean that you are going to take the UIImage raw pixel data and compress it, you could do that. I am sure that the iPhone could handle it.
If this is for a fun project, then go for it. If you want this to be a useful and used app, you will have some challenges:
It is very unlikely that Huffman will be better than the standard image compression used in JPG, PNG, etc.
Apple has already seen a need for better compression and implemented HEIF in iOS 11. WWDC Video about HEIF
They did a lot of work in the OS and Photos app to make sure HEIF is used locally, but if you share the photo it is turned into something anyone can use (e.g. JPG).
All of the compression they implement uses hardware acceleration. You could do this too, but the code is a lot harder than Huffman.
So, for learning and fun, it's a good project -- it might be easier to do as a Mac app instead, but for something meant to be real, it would be extremely hard to overcome the above issues.
There are 2 parts, encoding and decoding. The encoding process involves constructing a tree or a table-based representation of a tree. The decoding process covers reading the Huffman-encoded bytes and undoing a delta. It would likely be difficult to get much speed advantage in the encoding as compared to PNG, but for decoding a very effective speedup can be seen by moving the decoding logic to the GPU with Metal. You can have a look at the full source code of an example that does just that for grayscale images on GitHub: Metal Huffman.
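To get a feel for what the two halves involve, here is a minimal CPU-only Huffman sketch in Python (illustrative only; nothing iPhone-, Metal-, or delta-specific, and the bit strings are not packed into bytes):

```python
import heapq
from collections import Counter

def build_codes(data: bytes) -> dict:
    """Build a Huffman code table (byte -> bit string) from byte frequencies."""
    # Heap entries are (frequency, unique index, [(symbol, code-so-far), ...]);
    # the index breaks frequency ties so lists are never compared.
    heap = [(freq, i, [(sym, "")])
            for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate input: a single distinct byte
        return {heap[0][2][0][0]: "0"}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, i, right = heapq.heappop(heap)
        # Merging two subtrees prepends one more bit to every leaf's code.
        merged = [(s, "0" + c) for s, c in left] + [(s, "1" + c) for s, c in right]
        heapq.heappush(heap, (f1 + f2, i, merged))
    return dict(heap[0][2])

def encode(data: bytes, codes: dict) -> str:
    return "".join(codes[b] for b in data)

def decode(bits: str, codes: dict) -> bytes:
    """Walk the bit stream, emitting a byte whenever a full code is matched."""
    rev = {c: s for s, c in codes.items()}
    out, cur = [], ""
    for bit in bits:
        cur += bit
        if cur in rev:
            out.append(rev[cur])
            cur = ""
    return bytes(out)
```

Even this toy version shows why the encoder is the cheap part and the decoder's bit-by-bit walk is the natural target for the GPU speedup mentioned above.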

Cobalt for Amazon Prime

We are planning to use a Cobalt port on our embedded platform to run applications like Amazon Prime along with YouTube. Is it possible to use it for applications other than YouTube?
If so, what is the expected run-time footprint of Cobalt?
Also, is there any licensing cost associated with Cobalt?
Cobalt can run any application that has been engineered to run within the Cobalt subset of HTML/CSS/WebAPI. It is unlikely that Amazon Prime Video will run out of the box on Cobalt, but it could be made to run with some effort.
Cobalt is open source, and has no licensing fee.
For YouTube, Cobalt currently takes about 130MB of CPU memory. GPU usage depends a lot on the image cache settings and UI resolution.
The binary is currently about 5MB stripped and compressed, 19MB uncompressed. Our ICU data is 2MB compressed, 6MB uncompressed. We have a couple different font options that take varying amounts of space. We can cover almost everything in 10MB of uncompressed fonts.
These numbers are all subject to significant change.

OpenCL lossless video compression

I am looking for lossless video compression in OpenCL. It has to be lossless, as it is a project requirement. I found a few lossless algorithms written for OpenCV and ffmpeg, but none of them supports OpenCL encoding/decoding. I am using Apple computers, and they come with ATI graphics cards, which do not support CUDA.
Any help would be most appreciated.
You can use x264, which already has OpenCL support, and use a CRF of 0 (lossless). I know, it looks like H.264 is always lossy, but it turns out it also has a lossless mode that most of the time works better than other lossless codecs.
avconv -i input -c:v libx264 -preset slow --opencl -crf 0 -c:a copy outvideo.mp4
OpenCL in x264 is marginally faster than the plain CPU path, so it is not widely used. EDIT: On my system libx264 does not accept --opencl, but I think newer versions do accept that parameter. You may need the standalone "x264" binary, since libx264 may not expose all of the underlying functionality.
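With current ffmpeg builds (avconv's successor) the equivalent would look roughly like this; a sketch assuming ffmpeg compiled with libx264, where `-qp 0`, like `-crf 0`, selects x264's lossless mode:

```shell
# Lossless H.264: -qp 0 disables quantization entirely.
ffmpeg -i input.mov -c:v libx264 -preset slow -qp 0 -c:a copy lossless_out.mkv
```

Note that lossless H.264 uses the High 4:4:4 Predictive profile, which many hardware players cannot decode, so it is best treated as an archival/intermediate format rather than a delivery one.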
It is unlikely that you will find anything already implemented in OpenCL for this lossless video compression task. Your best bet would be to take something that already exists and adapt it; the basic approach of OpenCL is to split compute tasks into many threads that operate on small chunks of memory. You might take a look at WebM as a starting point.