I recently wrote C programs for image processing of BMP images. I had to read the pixel values and process them; it was quite simple, since I followed the BMP header layout and could get most of the information about the image from it.
Now the challenge is to process videos (frame by frame). How can I do it? How will I be able to read the headers of continuous streams of image frames in a video clip? Or is it that, for example, the MPEG format also has a universal header, upon reading which I can get the information about the entire video, and after the header all the data are just pixels?
I hope I have conveyed my question clearly.
Has anyone got experience with processing videos?
Any books or links to tutorials will be very helpful.
A video stream, like MPEG, is composed of a number of frames that depends (obviously) on its duration and frame rate. To read a pixel you must start from what is called an intra frame, which does not depend on any previous frame in the stream. Each successive frame is temporally dependent on the frames before it, so to obtain its pixels you have to decode the stream from the intra frame up to the frame you want.
Note that, typically, an intra frame is inserted periodically to give the decoder a way to synchronize with the stream. This is very useful in a context where errors can occur.
What you want to do isn't easy work. You have to use an MPEG decoder and then modify the frame before displaying it if you want to do post-processing, such as applying a filter.
I suggest you study video coding; you can find a lot of material on it, starting from the MPEG standards themselves.
I would recommend looking into FFMpeg. It has a command line utility that can grab frames from a movie and dump them to an image like JPG. You can then modify your existing reader to handle JPG files (just use something like libjpeg to decode the JPG to a raw pixel buffer).
Alternatively, you can use the FFMpeg APIs (C, Python, other), do the frame capture programmatically, and look at the pixels as you move through the video. Video formats are complex, so unless you want to start understanding all of the different codecs, you might want to grab a library to do the decode to a raw pixel buffer for you.
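For the second route, here is a minimal sketch of the libavformat/libavcodec decode loop (modern FFmpeg API; error handling and cleanup are mostly omitted, and process_luma() is a placeholder for whatever per-frame processing you want to do):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

void process_luma(const unsigned char *y, int stride, int w, int h); /* your own routine (hypothetical) */

int main(int argc, char **argv)
{
    AVFormatContext *fmt = NULL;
    if (argc < 2 || avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;
    avformat_find_stream_info(fmt, NULL);

    int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    const AVCodec *codec = avcodec_find_decoder(fmt->streams[vstream]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(ctx, fmt->streams[vstream]->codecpar);
    avcodec_open2(ctx, codec, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vstream && avcodec_send_packet(ctx, pkt) == 0) {
            while (avcodec_receive_frame(ctx, frame) == 0) {
                /* For the common planar YUV formats, data[0] is the luma (Y) plane:
                   frame->width x frame->height pixels with row stride linesize[0]. */
                process_luma(frame->data[0], frame->linesize[0],
                             frame->width, frame->height);
            }
        }
        av_packet_unref(pkt);
    }
    /* A real program would also flush the decoder and free everything here. */
    return 0;
}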
MPEG 1/2/4 videos are much more difficult to handle than bitmaps because they are compressed. With bitmap data, you have the actual color values stored directly in the file. In MPEG, or JPEG for that matter, the color information goes through a number of transformations before being written to the file. These include:
RGB -> YUV 420P (sub-sampled chrominance)
Discrete Cosine Transform
Weighted quantization
zig-zag ordering
differential encoding
variable-length encoding (Huffman-like)
All of this means that there is no simple way to parse pixel data out of the file. You either have to study every minute detail of the standard and write your own decoder, or use a video decoding library like ffmpeg to do the work for you. ffmpeg can convert your video to still images (see the answers to this recent question). Also, you could interface directly with the ffmpeg libraries (libavformat and libavcodec). See the answers to this question for good tutorials.
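Just to make the first item of that list concrete, here is a rough sketch of a full-range RGB to YCbCr conversion of the kind JPEG/MPEG encoders perform (coefficients per the usual BT.601-style matrix); it shows that even before any real compression happens, the stored values are no longer the RGB triples you would read out of a BMP:

void rgb_to_ycbcr(unsigned char r, unsigned char g, unsigned char b,
                  unsigned char *y, unsigned char *cb, unsigned char *cr)
{
    *y  = (unsigned char)( 0.299 * r + 0.587 * g + 0.114 * b);
    *cb = (unsigned char)(-0.169 * r - 0.331 * g + 0.500 * b + 128);
    *cr = (unsigned char)( 0.500 * r - 0.419 * g - 0.081 * b + 128);
}
/* For 4:2:0 sub-sampling, the Cb and Cr planes are additionally averaged
   over 2x2 pixel blocks, so they end up at half resolution in each direction. */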
I have an interesting problem I need to research related to very low level video streaming.
Has anyone had any experience converting a raw stream of bytes (separated into per-pixel information, but not a standard video format) into a low resolution video stream? I believe that I can map the data into RGB values per pixel, as the color values that correspond to the values in the raw data will be determined by us. I'm not sure where to go from there, or what the RGB format needs to be per pixel.
I've looked at FFMPeg, but its documentation is massive and I don't know where to start.
Specific questions I have include: is it possible to create a CVPixelBuffer with that pixel data? If I were to do that, what sort of format would I need to convert the per-pixel data to?
Also, should I be looking deeper into OpenGL, and if so where would the best place to look for information on this topic?
What about CGBitmapContextCreate? For example, if I went with something like this, what would a typical pixel byte need to look like? Would this be fast enough to keep the frame rate above 20 fps?
EDIT:
I think with the excellent help of you two, and some more research on my own, I've put together a plan for how to construct the raw RGBA data, then construct a CGImage from that data, and in turn create a CVPixelBuffer from that CGImage, following CVPixelBuffer from CGImage.
However, to then play that live as the data comes in, I'm not sure what kind of FPS I would be looking at. Do I paint the frames to a CALayer, or is there some class similar to AVAssetWriter that I could use to play them as I append CVPixelBuffers? The experience that I have is using AVAssetWriter to export constructed Core Animation hierarchies to video, so the videos are always constructed before they begin playing, and not displayed as live video.
I've done this before, and I know that you found my GPUImage project a little while ago. As I replied on the issues there, the GPUImageRawDataInput is what you want for this, because it does a fast upload of RGBA, BGRA, or RGB data directly into an OpenGL ES texture. From there, the frame data can be filtered, displayed to the screen, or recorded into a movie file.
Your proposed path of going through a CGImage to a CVPixelBuffer is not going to yield very good performance, based on my personal experience. There's too much overhead when passing through Core Graphics for realtime video. You want to go directly to OpenGL ES for the fastest display speed here.
I might even be able to improve my code to make it faster than it is right now. I currently use glTexImage2D() to update texture data from local bytes, but it would probably be even faster to use the texture caches introduced in iOS 5.0 to speed up refreshing data within a texture that maintains its size. There's some overhead in setting up the caches that makes them a little slower for one-off uploads, but rapidly updating data should be faster with them.
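For reference, the glTexImage2D()/glTexSubImage2D() pattern mentioned above looks roughly like this in plain OpenGL ES 2.0 (this is not GPUImage's actual code, just a sketch of the kind of calls it wraps; it assumes a current GL context and packed RGBA frame bytes):

#include <OpenGLES/ES2/gl.h>

/* Create an RGBA texture sized for the incoming frames (call once). */
GLuint create_frame_texture(int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL); /* allocate storage, no data yet */
    return tex;
}

/* Push one frame of packed RGBA bytes into the existing texture (call per frame). */
void upload_frame(GLuint tex, int width, int height, const unsigned char *rgba)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}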
My 2 cents:
I made an OpenGL game which lets the user record a 3D scene. Playback was done by replaying the scene (instead of playing back a video, because realtime encoding did not yield a comfortable FPS).
There is a technique which could help out, unfortunately I didn't have time to implement it:
http://allmybrain.com/2011/12/08/rendering-to-a-texture-with-ios-5-texture-cache-api/
This technique should cut down the time spent getting pixels back from OpenGL. You might get an acceptable video encoding rate.
I've had a strange realization while working on a project of mine.
I created a streaming solution where I stream an image with a resolution of 480x640, totaling 307,200 pixels, and every pixel contains 32 bits of data. By my calculations this means that every frame totals 1.2 MB of data, which means that 30 fps would require a 36 MB/s line.
So my question is: how does a streaming solution stream 30 fps over, for example, a 2 Mbit/s line?
I'm guessing the same question can probably be used to explain how a JPG image with a 480x640 resolution takes up less than 100 KB.
Compression is your friend.
I don't know the specifics of your solution, but a few assumptions can be made.
First off, even if you send each frame as a full frame, they should be compressed. Even lossless compression should get you some pretty good compression rates, but if you go with something lossy (like jpg) then you can get even more.
But that's not all you get. Any good video codec should provide significant compression as well. Parts of the image that don't change between frames don't need to be sent at all, and other parts can be compressed nicely too (I don't know many specifics about the compression used, but a lot is done to squeeze the data down).
This all adds up to a lot of savings over sending a full 32bit bitmap for every frame.
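To put rough numbers on it using the figures from the question: 36 MB/s is about 288 Mbit/s, so fitting that into a 2 Mbit/s line needs a reduction on the order of 144:1, which is well within what ordinary lossy video codecs achieve for typical content.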
Compression is a very broad topic. Just to get an idea, try reading the Wikipedia page about image compression.
As a very basic solution to your problem, I would personally jpeg-encode the first frame, then, jpeg-encode the differences between two consecutive frames.
For jpeg compression there are many libraries providing the functionality, without the need to implement it yourself.
If you are not so interested in the quality, you can also subsample the video, for example obtaining frames of resolution 240*320
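A minimal sketch of that difference idea, assuming 8-bit grayscale frames of equal size (the /2 + 128 mapping is a crude choice that keeps the delta in 0..255 at the cost of the lowest bit; the result can then be fed to any ordinary JPEG encoder):

#include <stddef.h>

void frame_delta(const unsigned char *prev, const unsigned char *cur,
                 unsigned char *delta, size_t n_pixels)
{
    for (size_t i = 0; i < n_pixels; i++) {
        int d = (int)cur[i] - (int)prev[i];       /* range -255 .. 255 */
        delta[i] = (unsigned char)(d / 2 + 128);  /* biased so it fits in a byte */
    }
}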
I want to use OpenCV to take an input video, and reduce the file size of the video.
This would require a conversion of the frames from the initial type to some transformed type.
What conversion can I use inside OpenCV to take each frame of the video, reduce its size, and then write it back, creating a new file with a reduced file size without compromising the video quality too much?
OpenCV uses FFMPEG under the hood. Check this link for detailed instructions. You can also use FFMPEG independently instead of OpenCV. If you already have compressed video data, you won't be able to compress it much further unless you lose some quality or reduce the frame size instead.
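If you do go the FFmpeg route directly, a typical re-encoding command (assuming a build with libx264; the exact scale and CRF values are up to you) looks something like:
ffmpeg -i input.avi -vf scale=640:-2 -c:v libx264 -crf 23 output.mp4
The scale filter reduces the frame size, and a higher CRF value gives a smaller (lower quality) file.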
I'm working with lots of cameras which capture in the BG Bayer pattern natively.
Now, every time I record some data, I save it to the disk in the raw bayer pattern, in an avi container. The problem is, that this really adds up after a while. After one year of research, I have close to 4TB of data...
So I'm looking for a lossless codec to compress this data. I know I could use libx264 (with --qp 0), or HuffYUV, Dirac or JPEG 2000, but they all assume you have RGB or YUV data. It's easy enough to convert the Bayered data to RGB and then compress it, but it kind of defeats the purpose of compression if you first triple the data. It would also mean that the demosaicing artefacts introduced by debayering end up in my source data, which isn't great either. It would be nice to have a codec that can work on the Bayered data directly.
Even nicer would be a solution involving a codec that is already supported by gstreamer (or ffmpeg), since that's what I'm already using.
A rather late suggestion, maybe useful for others..
It helps to deinterleave the Bayer pattern into four quadrants and then treat that image as grayscale. The sub-images (e.g. all red pixels in top left) have half the spatial resolution, but their pixels are more highly correlated. This leads to lower residuals from predictors using nearby pixels and therefore to better compression ratios.
I've seen this reach 2-3x lossless compression on 12-bit raw camera data.
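A sketch of that quadrant split for a 2x2 Bayer mosaic, producing four half-resolution planes that can then be handed to any grayscale lossless codec (which colour ends up in which quadrant depends on your sensor's pattern, hence the "e.g." labels):

#include <stddef.h>

void split_bayer_quadrants(const unsigned char *raw, int width, int height,
                           unsigned char *p00, unsigned char *p01,
                           unsigned char *p10, unsigned char *p11)
{
    int hw = width / 2, hh = height / 2;
    for (int y = 0; y < hh; y++) {
        for (int x = 0; x < hw; x++) {
            p00[y * hw + x] = raw[(2 * y)     * width + 2 * x];      /* e.g. B  */
            p01[y * hw + x] = raw[(2 * y)     * width + 2 * x + 1];  /* e.g. G1 */
            p10[y * hw + x] = raw[(2 * y + 1) * width + 2 * x];      /* e.g. G2 */
            p11[y * hw + x] = raw[(2 * y + 1) * width + 2 * x + 1];  /* e.g. R  */
        }
    }
}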
If a commercial solution is ok, check out Cineform. I've used their sdk for a custom video compressor and it works great plus they have some great tools for processing the raw video.
Or if you prefer the open source route check out Elphel JP4.
All I know about Bayer Patterns I learned from Wikipedia, but isn't conversion to RGB more of a deinterlacing than a tripling? Doesn't the resolution for red and blue go down by a factor of 4 and green by a factor of 2? If so, a lossless image compression scheme like lossless jpeg might be just the thing.
This is really a two part question, since I don't fully understand how these things work just yet:
My situation: I'm writing a web app which lets the user upload an image. My app then resizes it to something displayable (eg: 640x480-ish) and saves the file for use later.
My questions:
Given an arbitrary JPEG file, is it possible to tell what the quality level is, so that I can use that same quality when saving the resized image?
Does this even matter?? Should I be saving all the images at a decent level (eg: 75-80), regardless of the original quality?
I'm not so sure about this because, as I figure it (let's take an extreme example), if someone had a 5 megapixel image saved at quality 0, it would be blocky as anything. Reducing the image size to 640x480, the blockiness would be smoothed out and barely noticeable... until I saved it with quality 0 again...
On the other end of the spectrum, if there was an image which was 800x600 with q=0, resizing to 640x480 isn't going to change the fact that it looks like utter crap, so saving with q=80 would be redundant.
Am I even close?
I'm using the GD2 library with PHP, if that is of any use.
You can view the compression level using the identify tool in ImageMagick. Download and installation instructions can be found on the official website.
After you install it, run the following command from the command line:
identify -format '%Q' yourimage.jpg
This will return a value from 0 (low quality, small filesize) to 100 (high quality, large filesize).
Information source
JPEG is a lossy format. Every time you re-save the same image as a JPEG, regardless of the quality level, you will reduce the actual image quality. Therefore even if you did obtain a quality level from the file, you could not maintain that same quality when you save the JPEG again (even at quality=100).
You should save your JPEGs at as high a quality as you can afford in terms of file size, or use a lossless format such as PNG.
Low quality JPEG files do not simply become more blocky. Instead, colour depth is reduced and detail in sections of the image is removed. You can't rely on lower quality images being blocky and looking OK at smaller sizes.
According to the JFIF spec. the quality number (0-100) is not stored in the image header, although the horizontal and vertical pixel density is stored.
For future visitors checking the quality of a given JPEG, you can just use the ImageMagick tooling:
$> identify -format '%Q' filename.jpg
92%
The JPEG compression algorithm has several parameters which influence the quality of the resulting image.
One such parameter is the set of quantization tables, which define how coarsely each DCT coefficient is quantized and therefore how many bits it effectively gets. Different programs use different quantization tables.
Some programs allow the user to set a quality level of 0-100, but there is no common definition of this number. An image made with Photoshop at 60% quality takes 46 KB, while the image made with GIMP takes only 26 KB.
Their quantization tables are also different.
There are other parameters too, such as chroma subsampling, the DCT method, etc.
So you can't describe all of them with a single quality-level number, and you can't compare the quality of JPEG images by a single number. But you can define such a number, as Photoshop or GIMP do, to describe the compromise between size and quality.
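As a toy illustration of the quantization step mentioned above (the divisor table here is whatever the encoder chooses, which is exactly why the same "quality" number means different things in different programs):

#include <math.h>

/* Divide an 8x8 block of DCT coefficients element-wise by the quantization
   table and round; larger table entries mean coarser steps and more loss. */
void quantize_block(const double dct[64], const int table[64], int out[64])
{
    for (int i = 0; i < 64; i++)
        out[i] = (int)lround(dct[i] / table[i]);
}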
More information:
http://patrakov.blogspot.com/2008/12/jpeg-quality-is-meaningless-number.html
Common practice is to resize the image to the appropriate size first and apply JPEG compression after that. In this case huge and medium-sized originals will end up with the same size and quality.
Here is a formula I've found to work well:
jpg100size (the size in bytes it should not exceed for 98-100% quality) = width*height/1.7
jpgxsize = jpg100size*x (where x is the quality as a fraction, e.g. 0.65)
So you could use these to estimate, statistically, what quality your JPG was last saved at. If you want to get it down to, say, 65% quality and want to avoid resampling, you should first compare the size to make sure it isn't already too low, and only then reduce the quality.
As there are already two answers using identify, here's one that also outputs the file name (for scanning multiple files at once):
If you wish to have a simple output of filename: quality for use on multiple images, you can use
identify -format '%f: %Q' *
to show the filename + compression of all files within the current directory.
So, there are basically two cases you care about:
If an incoming image has quality set too high, it may take up an inappropriate amount of space. Therefore, you might want, for example, to reduce incoming q=99 to q=85.
If an incoming image has quality set too low, it might be a waste of space to raise its quality. Except that an image that's had a large amount of data discarded won't magically take up more space when the quality is raised -- blocky images will compress very nicely even at high quality settings. So, in my opinion, it's perfectly OK to raise incoming q=1 to q=85.
From this I would think simply forcing a decent quality setting is a perfectly acceptable thing to do.
Every new save of the file will further decrease the overall quality; by using higher quality values you will preserve more of the image, regardless of what the original image quality was.
If you resave a JPEG using the same software that created it originally, using the same settings, you'll find that the damage is minimized - the algorithm will tend to throw out the same information it threw out the first time. I don't think there's any way to know what level was selected just by looking at the file; even if you could, different software almost guarantees different parameters and rounding, making a match almost impossible.
This may be a silly question, but why would you be concerned about micromanaging the quality of the document? I believe if you use ImageMagick to do the conversion, it will manage the quality of the JPEG for you for best effect. http://www.php.net/manual/en/intro.imagick.php
Here are some ways to achieve your (1) and get it right.
There are ways to do this by fitting to the quantization tables. Sherloq - for example - does this:
https://github.com/GuidoBartoli/sherloq
The relevant (python) code is at https://github.com/GuidoBartoli/sherloq/blob/master/gui/quality.py
There is another algorithm written up in https://arxiv.org/abs/1802.00992 - you might consider contacting the author for any code etc.
You can also simulate file_size(image_dimensions,quality_level) and then invert that function/lookup table to get quality_level(image_dimensions,file_size). Hey presto!
Finally, you can adopt a brute-force https://en.wikipedia.org/wiki/Error_level_analysis approach by calculating the difference between the original image and recompressed versions each saved at a different quality level. The quality level of the original is roughly the one for which the difference is minimized. Seems to work reasonably well (but is linear in the for-loop..).
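For what it's worth, a rough sketch of that brute-force approach with libjpeg (recent libjpeg/libjpeg-turbo) might look like the following. Error handling is omitted, and the 50-100 search range and the sum-of-absolute-differences metric are arbitrary choices here, not part of any standard:

#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

/* Decode a JPEG held in memory to packed RGB; the caller frees the buffer. */
static unsigned char *decode_rgb(const unsigned char *jpg, unsigned long size,
                                 int *w, int *h)
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_mem_src(&cinfo, jpg, size);
    jpeg_read_header(&cinfo, TRUE);
    cinfo.out_color_space = JCS_RGB;
    jpeg_start_decompress(&cinfo);
    *w = cinfo.output_width;
    *h = cinfo.output_height;
    unsigned char *rgb = malloc((size_t)(*w) * (*h) * 3);
    while (cinfo.output_scanline < cinfo.output_height) {
        unsigned char *row = rgb + (size_t)cinfo.output_scanline * (*w) * 3;
        jpeg_read_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    return rgb;
}

/* Re-encode RGB pixels at the given quality, returning freshly allocated JPEG bytes. */
static unsigned char *encode_rgb(const unsigned char *rgb, int w, int h,
                                 int quality, unsigned long *out_size)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    unsigned char *out = NULL;
    *out_size = 0;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    jpeg_mem_dest(&cinfo, &out, out_size);
    cinfo.image_width = w;
    cinfo.image_height = h;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, quality, TRUE);
    jpeg_start_compress(&cinfo, TRUE);
    while (cinfo.next_scanline < cinfo.image_height) {
        JSAMPROW row = (JSAMPROW)(rgb + (size_t)cinfo.next_scanline * w * 3);
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    return out;
}

/* Re-encode at each candidate quality, decode again and accumulate the absolute
   pixel difference against the original; the minimum is the estimate. */
int estimate_quality(const unsigned char *jpg, unsigned long size)
{
    int w, h, best_q = -1;
    unsigned long long best_err = ~0ULL;
    unsigned char *orig = decode_rgb(jpg, size, &w, &h);
    for (int q = 50; q <= 100; q++) {
        unsigned long re_size;
        unsigned char *re = encode_rgb(orig, w, h, q, &re_size);
        int rw, rh;
        unsigned char *round_trip = decode_rgb(re, re_size, &rw, &rh);
        unsigned long long err = 0;
        for (size_t i = 0; i < (size_t)w * h * 3; i++)
            err += abs((int)orig[i] - (int)round_trip[i]);
        if (err < best_err) { best_err = err; best_q = q; }
        free(round_trip);
        free(re);
    }
    free(orig);
    return best_q;
}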
Most often the quality factor used seems to be 75 or 95 which might help you to get to the result faster. Probably no-one would save a JPEG at 100. Probably no-one would usefully save it at < 60 either.
I can add other links for this as they become available - please put them in the comments.
If you trust Irfanview estimation of JPEG compression level you can extract that information from the info text file created by the following Windows line command (your path to i_view32.exe might be different):
"C:\Program Files (x86)\IrfanView\i_view32.exe" <image-file> /info=txtfile
The JPEG compression level is recorded in the IPTC data of an image.
Use exiftool (it's free) to get the exif data of an image then do a search on the returned string for "Photoshop Quality". Or at least put the data returned into a text document and check to see what's recorded. It may vary depending on the software used to save the image.
"Writer Name : Adobe Photoshop
Reader Name : Adobe Photoshop CS6
Photoshop Quality : 7"