Scale intensity of MP4 to counts originally collected by the camera

I have data saved as mp4 files containing the intensity of a fluorescent signal; however, because they were exported in mp4 format, the intensity shown in the image is lower than in the original data.
Is there any simple way to multiply the mp4 data by a factor to obtain an approximation of the original counts? Thanks!!
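If you know the count range that was mapped onto the 8-bit 0–255 scale during export, a linear rescale gives an approximation. The sketch below assumes such a linear mapping (the function name `rescale_to_counts` and the 12-bit example range are illustrative); note that lossy mp4 compression perturbs pixel values, so exact counts cannot be recovered.

```python
import numpy as np

def rescale_to_counts(frame_8bit, original_max_count, original_min_count=0):
    """Map 8-bit pixel values back to an approximate count range.

    Assumes the export linearly mapped [original_min_count, original_max_count]
    onto [0, 255]; lossy mp4 compression means this is only an approximation.
    """
    scale = (original_max_count - original_min_count) / 255.0
    return frame_8bit.astype(np.float64) * scale + original_min_count

# e.g. a camera that originally recorded 12-bit counts (0..4095)
frame = np.array([[0, 128, 255]], dtype=np.uint8)
counts = rescale_to_counts(frame, original_max_count=4095)
```

If the export applied gamma correction or auto-scaling per frame, a single global factor will not suffice; the original acquisition metadata would be needed to invert those.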

Related

Save cv::Mat images as raw yuv file

I have a video of several cv::Mat RGB frames that I need to store as videos with lossless encoding on Jetson Xavier NX. In this process, I need to convert the RGB frames to YUV444 and then use Jetson's inbuilt lossless encoding tools to encode to h264/h265 and then save in an mp4 file format.
I can read cv::Mat frame and then convert it to the YUV444 pixel format (either manually element-wise or using OpenCV's inbuilt converter). Currently, I am writing the YUV444 frames to a binary file per frame using std::ofstream(filename, std::ios::binary). However, when I encode the yuv file to h264 or h265, I get very bad results. Is there something I am missing?
Is there a better method to save the video into a single yuv file (such that I can run the encoder on the entire video)?
[Image: original frame in YUV444 space]
[Image: the same frame in the encoded video]
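One common cause of bad encodes is feeding the encoder per-frame files or packed (interleaved) pixel data when it expects a single raw file in planar layout. A minimal sketch of writing all frames into one planar yuv444 file (assuming each frame is an HxWx3 uint8 array already converted to YUV; the function name is illustrative):

```python
import numpy as np

def write_yuv444_planar(frames, path):
    """Append each frame's Y, U, V planes back-to-back in one raw file.

    Raw-video encoders typically expect planar layout (all Y samples, then
    all U, then all V per frame) with every frame concatenated in a single
    file, not one file per frame.
    """
    with open(path, "wb") as f:
        for frame in frames:
            for c in range(3):                 # write Y, U, V planes in order
                f.write(frame[:, :, c].tobytes())

# two synthetic 2x2 frames: one all-black, one all-white
frames = [np.full((2, 2, 3), v, dtype=np.uint8) for v in (0, 255)]
write_yuv444_planar(frames, "out.yuv")
```

When invoking the encoder, make sure the declared pixel format (e.g. yuv444p), resolution, and frame rate match exactly what was written; a mismatch there also produces garbage output.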

How to match pixel data obtained by capture card with original pixel data

I have two PCs connected by a capture card. My goal is for PC1 to obtain PC2's screen pixel data through the capture card without any difference in pixel values.
In practice, the two images ("the original screen contents of PC2" and "the image obtained via OpenCV's VideoCapture() through the capture card") don't look much different to the naked eye, but their pixel values differ.
The monitor of PC2 runs at FHD (1920x1080), and the specification of my capture card is:

HDMI input resolution: up to 4K@30Hz
Supported video format: 8/10/12-bit deep color
Video output format: YUV, JPEG
Video output resolution: up to 1080p@30Hz

If this is not simply a limitation of the device, how can I obtain perfectly identical pixel data?
And if it is a limitation of the device, what kind of capture card should I buy?

Multi-Channel EXR Files

I am relatively new to computer vision and image processing. I have a single EXR file with 7 channels: channels 1-3 give me the RGB values, 4-6 give me the surface normals encoded as RGB values, and the 7th channel contains depth information from the camera of the rendered image. I was wondering if there was any way to view these 7 channels separately. For example, I would like an image showing only the depth values in grayscale. So far I haven't found a multi-channel EXR file viewer that does this for me.
Thanks for the help!
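Rather than hunting for a viewer, you can load the EXR into an array (e.g. via the OpenEXR Python bindings, or OpenCV with `cv2.IMREAD_UNCHANGED`) and split the channels yourself. The sketch below shows only the split/normalize step on a synthetic HxWx7 array; the channel order (RGB, normals, depth) matches the question but would need checking against your file's actual channel names:

```python
import numpy as np

def split_channels(channels):
    """Split an HxWx7 array into RGB, normals, and a grayscale depth image.

    Assumed channel order: 0-2 RGB, 3-5 surface normals, 6 depth.
    Depth is min-max normalized to [0, 1] so it displays as grayscale.
    """
    rgb = channels[:, :, 0:3]
    normals = channels[:, :, 3:6]
    depth = channels[:, :, 6]
    d_min, d_max = depth.min(), depth.max()
    gray = (depth - d_min) / (d_max - d_min) if d_max > d_min else np.zeros_like(depth)
    return rgb, normals, gray

data = np.zeros((2, 2, 7), dtype=np.float32)
data[:, :, 6] = [[1.0, 2.0], [3.0, 5.0]]   # synthetic depth values
rgb, normals, gray = split_channels(data)
```

The normalized `gray` array can then be saved or shown with any image library (e.g. multiplied by 255 and written as PNG) to get the grayscale depth view you describe.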

Jpeg, remove chroma, keep only luma without recompressing?

Is it possible to strip down chroma information from a jpeg file without loss on the luma?
Ideally I'd like a smaller file-size, greyscale version of an existing and optimized image.
Assuming the scans are not interleaved, you could update the SOF marker to declare a single component and then delete the 2nd and 3rd (chroma) scans from the input stream. In practice, `jpegtran -grayscale` from libjpeg performs a lossless-on-luma conversion like this for you.
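To illustrate what "updating the SOF marker" involves, the sketch below walks the JPEG marker segments and reads the component count declared in SOF0; a real tool would rewrite that byte to 1 and drop the chroma scans. This is a parsing sketch over a hand-built marker stub, not a complete converter:

```python
def find_sof_components(jpeg_bytes):
    """Walk JPEG marker segments; return the component count declared
    in the SOF0 (0xFFC0) segment, or None if no SOF0 is found."""
    i = 2                                        # skip SOI marker (FFD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break
        marker = jpeg_bytes[i + 1]
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xC0:                       # SOF0 payload: precision,
            return jpeg_bytes[i + 9]             # height, width, ncomponents
        i += 2 + length                          # jump to the next marker
    return None

# hand-built stub: SOI + SOF0 declaring 3 components (Y, Cb, Cr) + EOI
stub = bytes([0xFF, 0xD8,
              0xFF, 0xC0, 0x00, 0x11,           # SOF0, segment length 17
              0x08,                             # sample precision: 8 bits
              0x00, 0x10, 0x00, 0x10,           # height 16, width 16
              0x03,                             # 3 components
              0x01, 0x22, 0x00,                 # Y  component spec
              0x02, 0x11, 0x01,                 # Cb component spec
              0x03, 0x11, 0x01,                 # Cr component spec
              0xFF, 0xD9])
ncomp = find_sof_components(stub)
```

A grayscale rewrite would set the component count to 1, keep only the Y component spec (adjusting the segment length), and remove the Cb/Cr scan data — which is exactly the operation the answer above describes.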

Is there a video format where the coloured video is three times the size of its grayscale version?

Is there a video format in which the colored video is three times the size of its grayscale version? Say the grayscale video is 30MB; is there any video format where its colored version is 90MB? Ideally, a colored video should be roughly three times the size of its grayscale version, since grayscale contains a single array whereas colored images are made of three arrays.
However, when I convert colored MP4 or AVI videos into their grayscale versions, there is not much of a reduction in file size. I want a video format in which there is at least a 50% reduction.
An uncompressed video stream will indeed be three times the size in color compared to greyscale.
However, video compression typically treats the color component (hue and saturation, chrominance, or whatever is used) and the intensity (luminosity, brightness, or whatever is used) differently. The color component is typically compressed much more strongly, because our eyes are less sensitive to degradation in the quality of the color reproduction.
For example, JPEG compression (it's for photos, not video, but the same principle applies) typically stores half or a quarter as many samples for chrominance as for luminance. See the Wikipedia description of chroma subsampling for more details.
Thus, it is normal and expected that there is not a 1:3 ratio in the size of compressed video stream for greyscale vs RGB video.
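The arithmetic makes the point concrete. The 3:1 ratio holds only for uncompressed frames; with 4:2:0 chroma subsampling (common in MP4/AVI content) color adds just 50% even before entropy coding, and the compressor shrinks the chroma planes further still (the frame dimensions below are just an example):

```python
def raw_video_bytes(width, height, n_frames, bytes_per_pixel):
    """Uncompressed stream size: every pixel stored at the given depth."""
    return width * height * n_frames * bytes_per_pixel

w, h, n = 1920, 1080, 30                   # one second of 1080p at 30 fps
gray   = raw_video_bytes(w, h, n, 1)       # 1 byte per pixel
rgb    = raw_video_bytes(w, h, n, 3)       # 3 bytes per pixel: exactly 3x gray
yuv420 = raw_video_bytes(w, h, n, 1.5)     # 4:2:0: chroma at quarter resolution
```

So even before compression, a 4:2:0 color stream is only 1.5x its grayscale counterpart, which is why converting compressed color video to grayscale saves far less than the hoped-for two thirds.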
