I have a sequence of cv::Mat RGB frames that I need to store as a video with lossless encoding on a Jetson Xavier NX. In this process, I need to convert the RGB frames to YUV444, use Jetson's built-in lossless encoding tools to encode to H.264/H.265, and then save the result in an MP4 container.
I can read a cv::Mat frame and convert it to the YUV444 pixel format (either manually, element-wise, or using OpenCV's built-in converter). Currently, I am writing each YUV444 frame to its own binary file using std::ofstream(filename, std::ios::binary). However, when I encode the YUV data to H.264 or H.265, I get very bad results. Is there something I am missing?
Is there a better way to save the whole video into a single .yuv file, so that I can run the encoder on the entire sequence?
[Image 1: original frame in YUV444 space]
[Image 2: the same frame in the encoded video]
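Update: from reading around, I suspect the problem is how I lay the bytes out on disk. Below is a minimal sketch of what I now believe the raw-YUV route should look like, assuming the encoder wants a single planar YUV444 file with every frame appended back to back (the planar layout and the single-file assumption are my guesses, not something I have verified):

```cpp
#include <opencv2/opencv.hpp>
#include <fstream>
#include <vector>

// Append one RGB frame to an already-open raw YUV444 file in planar order:
// all Y bytes, then all U bytes, then all V bytes for this frame.
void appendFrameAsPlanarYUV444(const cv::Mat& rgb, std::ofstream& out)
{
    cv::Mat yuv;
    cv::cvtColor(rgb, yuv, cv::COLOR_RGB2YUV);   // packed (interleaved) YUV444

    std::vector<cv::Mat> planes;
    cv::split(yuv, planes);                      // planes[0]=Y, [1]=U, [2]=V

    for (const cv::Mat& p : planes)
    {
        // Write each plane contiguously; fall back to row-by-row writes
        // if the Mat is not stored continuously in memory.
        if (p.isContinuous())
            out.write(reinterpret_cast<const char*>(p.data), p.total());
        else
            for (int r = 0; r < p.rows; ++r)
                out.write(reinterpret_cast<const char*>(p.ptr(r)), p.cols);
    }
}

int main()
{
    // One file for the whole sequence, not one file per frame.
    std::ofstream out("video.yuv", std::ios::binary);
    // for (const cv::Mat& frame : frames)
    //     appendFrameAsPlanarYUV444(frame, out);
}
```

Whatever tool does the encoding also has to be told the resolution, frame rate, and that the input is planar 4:4:4; the exact color matrix it assumes (BT.601 vs BT.709, full vs limited range) is a separate question.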
I have data that I saved as MP4 files containing the intensity of a fluorescent signal. However, because they were exported in MP4 format, the intensity shown in the image is lower than in the original data.
Is there any simple way, perhaps by multiplying the MP4 data by a factor, to obtain an approximation of the original counts? Thanks!
Here is a link (https://imgplay.zendesk.com/hc/en-us/articles/360029411991-What-is-GIF-Dithering-Option-) which says: "When you save the file as GIF with dithering, it can make your GIF more natural."
How can I implement dithering to create more natural-looking GIFs from UIImages or video frames using Objective-C or Swift?
Assuming your source image is 8-bit-per-channel RGB, you could use vImage. vImage doesn't have a vImageConvert_RGB888toIndexed8, but you can split your interleaved image into three 8-bit planar buffers, one per RGB channel. I don't know exactly how well this would work, but you could convert two of the three channels to Indexed2 with vImageConvert_Planar8toIndexed2 and the other channel to Indexed4 with vImageConvert_Planar8toIndexed4. That would give you the required 8-bit lookup table.
Apple has loads of Accelerate sample code projects. "Optimising Image-Processing Performance" discusses converting interleaved images to planar format. If you have a known palette, "Applying Color Transforms to Images with a Multidimensional Lookup Table" may be a solution for quantising your image to 256 colors.
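If it helps, here is a minimal sketch of just that first de-interleaving step (plain C-style vImage calls, so usable from C, C++, Objective-C or Swift; error checking, buffer cleanup, and the Indexed2/Indexed4 calls are deliberately left out):

```cpp
#include <Accelerate/Accelerate.h>

// Split an interleaved 8-bit-per-channel RGB image into three planar
// 8-bit vImage buffers, one per channel.
void splitRGBToPlanar(const vImage_Buffer* rgb,
                      vImage_Buffer* r, vImage_Buffer* g, vImage_Buffer* b)
{
    vImageBuffer_Init(r, rgb->height, rgb->width, 8, kvImageNoFlags);
    vImageBuffer_Init(g, rgb->height, rgb->width, 8, kvImageNoFlags);
    vImageBuffer_Init(b, rgb->height, rgb->width, 8, kvImageNoFlags);

    // De-interleave RGBRGB... into separate R, G and B planes.
    vImageConvert_RGB888toPlanar8(rgb, r, g, b, kvImageNoFlags);
}
```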
For a project I'm currently working on, I'm trying to convert a bunch of PNG images to HEIF/HEIC. These images will be used in Xcode's .xcassets, which will then be "compiled" into a .car file.
Compiling the PNGs (~150 files in total) results in an Assets.car of ~40 MB, which is why I'm trying to convert them to HEIF/HEIC in the first place. I've tried various solutions, such as ImageMagick, "Export as" in GIMP, biodranik/HEIF, libheif's heif-enc, and exporting the PNGs as 8-bit or 16-bit from Photoshop and doing everything all over again. But everything results in the .heic file looking "broken" on iOS. The first image shows the best output I've got so far, but it still has fringes around the edges. The white rounded rectangle on the right is iOS's Face ID padlock.
The second image is (I think) a 16-bit PNG converted to HEIC using libheif 1.8.0, upgraded through Homebrew, with the lossless quality preset and 10-bit output. heif-enc complained about the color space being converted from RGB to YCbCr, stating: "even though you specified lossless compression, there will be differences because of the color conversion".
Is there any way to properly convert PNG files to HEIF/HEIC without such quality loss? Please don't suggest online services to convert files, as I'd like to keep total control of my files.
Note: To get lossless encoding, you need this set of options. Try:
-L : switch the encoder to lossless mode
-p chroma=444 : switch off chroma subsampling
--matrix_coefficients=0 : encode in the RGB color space
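Put together, the invocation would look roughly like heif-enc -L -p chroma=444 --matrix_coefficients=0 input.png -o output.heic (the -o/--output flag and the exact option spelling can vary between libheif versions, so check heif-enc --help on your build).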
I am writing code to convert a frame from an MP4 file into an OpenGLES texture, and I am using AVAssetReaderTrackOutput to access the pixel buffer. What is the best pixel buffer format to request? Right now I am reusing my old code, which converts YUV420P to RGB in an OpenGLES shader, because I previously fed it with libav. Now that I am switching to AVFoundation, I'm wondering whether my OpenGLES shader is faster than simply setting the pixel buffer format to RGBA, or whether I should request a YUV format and keep the shader.
Thanks
I guess this depends on the destination of your data. If all you are after is passing the data through, native YUV should be faster than BGRA. If you need to read the data back as RGBA or BGRA, I'd stick to BGRA and use an OpenGL texture cache rather than glReadPixels().
I recommend reading the answer to this SO question on the YUV method. Quote:
"Video frames need to go to the GPU in any case: using YCbCr saves you 25% bus bandwidth if your video has 4:2:0 sampled chrominance."
I'm capturing output from the iPad camera and then sending it to a server. I'm new to image processing, so I'm learning a lot as I go.
The server needs to receive the frames in YUV420P. Right now, all I get is pretty much a grayscale frame. I'm guessing that's down to the bi-planar vs. planar difference.
How can I convert the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange data coming from the camera into YUV420P frames?
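From what I've pieced together, the bi-planar buffer stores one Y plane plus one interleaved CbCr plane, so I think I need to copy Y and de-interleave CbCr into separate U and V planes. Here is my untested sketch against the CoreVideo C API (it ignores the video-range vs. full-range scaling implied by the pixel format name):

```cpp
#include <CoreVideo/CoreVideo.h>
#include <cstdint>
#include <cstring>
#include <vector>

// Convert a bi-planar 4:2:0 pixel buffer (Y plane + interleaved CbCr plane)
// into planar YUV420P: all Y bytes, then all U (Cb), then all V (Cr).
std::vector<uint8_t> toYUV420P(CVPixelBufferRef buf)
{
    CVPixelBufferLockBaseAddress(buf, kCVPixelBufferLock_ReadOnly);

    const size_t w  = CVPixelBufferGetWidthOfPlane(buf, 0);
    const size_t h  = CVPixelBufferGetHeightOfPlane(buf, 0);
    const size_t cw = CVPixelBufferGetWidthOfPlane(buf, 1);
    const size_t ch = CVPixelBufferGetHeightOfPlane(buf, 1);

    std::vector<uint8_t> out(w * h + 2 * cw * ch);
    uint8_t* y = out.data();
    uint8_t* u = y + w * h;
    uint8_t* v = u + cw * ch;

    // Copy the luma plane row by row (the stride may be wider than the image).
    const uint8_t* srcY =
        static_cast<const uint8_t*>(CVPixelBufferGetBaseAddressOfPlane(buf, 0));
    const size_t strideY = CVPixelBufferGetBytesPerRowOfPlane(buf, 0);
    for (size_t r = 0; r < h; ++r)
        std::memcpy(y + r * w, srcY + r * strideY, w);

    // Split the interleaved CbCr plane into separate U and V planes.
    const uint8_t* srcC =
        static_cast<const uint8_t*>(CVPixelBufferGetBaseAddressOfPlane(buf, 1));
    const size_t strideC = CVPixelBufferGetBytesPerRowOfPlane(buf, 1);
    for (size_t r = 0; r < ch; ++r)
        for (size_t c = 0; c < cw; ++c)
        {
            u[r * cw + c] = srcC[r * strideC + 2 * c];     // Cb
            v[r * cw + c] = srcC[r * strideC + 2 * c + 1]; // Cr
        }

    CVPixelBufferUnlockBaseAddress(buf, kCVPixelBufferLock_ReadOnly);
    return out;
}
```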