We are planning to use a Cobalt port on our embedded platform to run applications like Amazon Prime along with YouTube. Is it possible to use it for applications other than YouTube?
If so, what is the expected run-time footprint of Cobalt?
Also, is there any licensing cost associated with Cobalt?
Cobalt can run any application that has been engineered to run within the Cobalt subset of HTML/CSS/WebAPI. It is unlikely that Amazon Prime Video will run out of the box on Cobalt, but it could be made to run with some effort.
Cobalt is open source, and has no licensing fee.
For YouTube, Cobalt currently takes about 130MB of CPU memory. GPU usage depends a lot on the image cache settings and UI resolution.
The binary is currently about 5MB stripped and compressed, 19MB uncompressed. Our ICU data is 2MB compressed, 6MB uncompressed. We have a couple different font options that take varying amounts of space. We can cover almost everything in 10MB of uncompressed fonts.
These numbers are all subject to significant change.
Related
I want to upload a dataset to Google Drive, but its size is so big that it takes a lot of time and uses a lot of data! Does compressing the dataset into a zip file reduce the size of the file for uploading to Google Drive?
I searched for ways to reduce file sizes, but most results are about reducing the quality of images or cropping the white space around pictures. What can we do with datasets?
If you can get a good compression ratio with your data set, then yes, zipping should decrease the upload time. That said, it all depends on the nature of your data set.
For example, if your files are already compressed, or contain images/videos (which already have compression applied), there is a good chance you will either get very little compression or even end up with a larger amount of data to upload.
The best thing is to run an experiment on a representative sample of your data set to see whether you get a good compression ratio.
The other point to consider is the cost of carrying out the compression. If your sole intention is to decrease the upload time (and not to save on the amount of data stored on Google Drive), your total time cost now becomes the time taken to compress the data plus the upload time. Again, try an experiment with a representative subset of your data to see if there is a benefit.
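A quick way to run that experiment is a small Python sketch like the one below; the directory name is a placeholder for a representative slice of your data.

# Sketch: zip a representative sample directory and report the compression ratio.
import os
import zipfile

sample_dir = "sample_subset"  # placeholder: a representative slice of the dataset

raw_bytes = 0
with zipfile.ZipFile("sample.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for root, _, files in os.walk(sample_dir):
        for name in files:
            path = os.path.join(root, name)
            raw_bytes += os.path.getsize(path)
            zf.write(path)

zipped_bytes = os.path.getsize("sample.zip")
print(f"raw: {raw_bytes} B, zipped: {zipped_bytes} B, "
      f"ratio: {zipped_bytes / raw_bytes:.2f}")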
I want to compress incoming raw images from the robot camera in ROS so they can be stored and uploaded efficiently. Raw images are huge and take a lot of space: about 1 GB per second at 30 fps. I need a lossless compression algorithm that can be used while recording a rosbag.
The standard ROS image_transport package (via its compressed_image_transport plugin) supports the PNG format, which is lossless. I should also note that if you're doing this to reduce the size of bag files, the standard rosbag CLI tools support two types of bagfile compression too (BZ2 and LZ4).
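As an illustration only (the topic names and the 8-bit, 3-channel encoding are assumptions, not from the question), here is a minimal rospy node that republishes raw frames as PNG-compressed messages, so recording the compressed topic stores the smaller payload:

# Sketch: republish a raw image topic as lossless PNG so the bag stays small.
import rospy
import numpy as np
import cv2
from sensor_msgs.msg import Image, CompressedImage

def callback(msg):
    # Assumes an 8-bit, 3-channel image (e.g. bgr8); adjust dtype/shape otherwise.
    frame = np.frombuffer(msg.data, dtype=np.uint8).reshape(msg.height, msg.width, 3)
    ok, png = cv2.imencode('.png', frame)  # PNG encoding is lossless
    if not ok:
        return
    out = CompressedImage()
    out.header = msg.header
    out.format = 'png'
    out.data = png.tobytes()
    pub.publish(out)

rospy.init_node('png_republisher')
pub = rospy.Publisher('/camera/image_raw/png', CompressedImage, queue_size=1)
rospy.Subscriber('/camera/image_raw', Image, callback, queue_size=1)
rospy.spin()

Pointing rosbag record at the compressed topic then captures the PNG payloads instead of the raw frames.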
I'm looking at the MediaConvert service from AWS to transcode videos. The value I'm trying to set right now is the quality level (QL) for QVBR; according to this, it can depend on the platform, for example for 720p/1080p resolution it proposes QL=8/9 (for TV), QL=7 (for tablet), and QL=6 (for smartphone).
The app has a version for each of the three types of devices, so I'm asking: do I need to keep 3 versions of the same video? I want to save some money on streaming. My app has a similar number of users on each platform, and I want to save bandwidth while still providing good-quality videos.
Higher QVBR quality levels (QL) correspond to higher bitrates in the output.
For a large display such as a TV, a higher QVBR QL is recommended to help improve the viewer experience. But when viewing the same content on a smaller display such as a phone, you may not need all of those extra bits to still have a good experience.
In general, it's recommended to create an output targeted for each of the various devices or resolutions content will be viewed on. This will help save bandwidth for the smaller devices while still delivering high quality for the larger ones.
This concept is referred to as Adaptive Bitrate (ABR) Streaming, and is a common feature of streaming formats such as HLS and DASH (among others). The MediaConvert documentation has a section on how to create ABR outputs as well: https://docs.aws.amazon.com/mediaconvert/latest/ug/video-abr-streaming-outputs.html
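For a rough sense of what this looks like in a job, here is an abbreviated Python sketch of the per-rendition video settings. Only the QVBR fields come from the question; the 540p rendition and the MaxBitrate caps are made-up examples, and a real create_job call also needs Role, Settings.Inputs, OutputGroupSettings (e.g. HLS), and more.

# Sketch: three ABR renditions of the same source, each with its own QVBR
# quality level. This is only the VideoDescription fragment of a job.
def h264_qvbr(width, height, quality_level, max_bitrate):
    return {
        "Width": width,
        "Height": height,
        "CodecSettings": {
            "Codec": "H_264",
            "H264Settings": {
                "RateControlMode": "QVBR",
                "QvbrSettings": {"QvbrQualityLevel": quality_level},
                "MaxBitrate": max_bitrate,  # QVBR still requires a bitrate cap
            },
        },
    }

renditions = [
    h264_qvbr(1920, 1080, 9, 6_000_000),  # TV
    h264_qvbr(1280, 720, 7, 3_000_000),   # tablet
    h264_qvbr(960, 540, 6, 1_500_000),    # smartphone (resolution is an example)
]

Packaged into an HLS or DASH output group, these renditions become the ABR ladder the player switches between.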
I want to do some GPU computing with an NVIDIA card, and am deciding between a GTX 960 with 2GB or 4GB of RAM. Which one should I take? How much difference would this make in terms of the batch size I can use for mini-batch gradient descent? Would the difference be significant?
Thank you for your answers.
One of the most costly operations is copying data to/from the GPU device. Therefore, if you anticipate working with datasets larger than 2GB, the larger memory will be of great benefit. You could either store large chunks of data (some multiple of the minibatch size) at a time, and/or possibly store the entire held-out dataset if frequent evaluation is necessary. Of course, you could always use async copies to/from the GPU (if the device supports it) or other optimizations and certainly do fine with the smaller memory; however, this adds complexity, and any toolkits you use (if applicable) may not take advantage of this feature.
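For a rough sense of scale, here is a back-of-the-envelope calculation; the image shape is a made-up example, not a measurement.

# Sketch: rough memory needed just for a minibatch of images in float32,
# ignoring weights, activations, and framework overhead.
bytes_per_float = 4
c, h, w = 3, 224, 224                     # example image shape (assumption)
per_image = c * h * w * bytes_per_float   # ~0.6 MB per image

for batch_size in (32, 64, 128, 256):
    mb = batch_size * per_image / 2**20
    print(f"batch {batch_size:>3}: ~{mb:.0f} MB for inputs alone")

The inputs are usually the small part; activations also scale with batch size and typically dominate, so in practice the 4GB card roughly doubles the feasible batch size for a given model compared to the 2GB card.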
WebP seems to have an incredible number of settings you can tweak. That's great if you are converting images by hand, but is there a set of recommended default settings for WebP, or has Google published somewhere what settings they are using for YouTube?
See this talk: https://www.youtube.com/watch?v=8vJSCmIMIjI where they mention a 20% savings in file size. We'd love to see a similar decrease in file size without sacrificing (much) quality from our JPEG images, but I don't trust that I can just play with the settings for a little while and eyeball it to decide whether I've degraded the quality too far...
cwebp -preset photo -q 75 input_file -o output_file.webp
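If you'd rather measure than eyeball, here is a sketch that batch-converts a folder with the settings above and reports the size savings; the paths are placeholders, and it shells out to cwebp as in the command above.

# Sketch: convert every JPEG in a folder and report per-file and total savings.
import os
import glob
import subprocess

total_in = total_out = 0
for src in glob.glob("images/*.jpg"):  # placeholder input folder
    dst = os.path.splitext(src)[0] + ".webp"
    subprocess.run(["cwebp", "-preset", "photo", "-q", "75", src, "-o", dst],
                   check=True, capture_output=True)
    in_size, out_size = os.path.getsize(src), os.path.getsize(dst)
    total_in += in_size
    total_out += out_size
    print(f"{src}: {in_size} -> {out_size} bytes")

print(f"total: {100 * (1 - total_out / total_in):.1f}% smaller")

If your cwebp build supports the -print_psnr or -print_ssim flags, those give an objective quality check to go along with the size numbers.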