How to read an AVI video file on Colab with OpenCV?

I just want to read an AVI video on Colab. I have successfully read an MPEG video with OpenCV there, so why can't I read an AVI video on Colab with OpenCV? My code looks like this:
import cv2
path = '/content/video.avi'
cap = cv2.VideoCapture(path)
flag, frame = cap.read()
I am sure the file exists. Could this problem be related to ffmpeg?
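Quite possibly: OpenCV can only demux AVI containers when it was built with FFMPEG support, and you can check that from `cv2.getBuildInformation()`. A minimal sketch of such a check (the `has_ffmpeg` helper is my own illustration, not an OpenCV API):

```python
def has_ffmpeg(build_info: str) -> bool:
    # OpenCV's build information contains a "Video I/O" section with a line
    # like "FFMPEG: YES" (or "NO") depending on how it was compiled.
    for line in build_info.splitlines():
        if "FFMPEG" in line:
            return "YES" in line
    return False

if __name__ == "__main__":
    # Only run where OpenCV is actually installed (e.g. on Colab).
    import cv2
    print(has_ffmpeg(cv2.getBuildInformation()))
```

If this prints `False`, the AVI failure is a build issue rather than a problem with your code.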

Related

Big issue reading a large 16bit grayscale PNG using Python

I have a big issue trying to convert a large scientific 16-bit PNG image (7 MB) to JPG in Python, in order to compress it and check for any artifacts.
The original image can be found at:
https://postimg.cc/p5PQG8ry
Reading other answers here I have tried Pillow and OpenCV without any success; the only thing I obtain is a white sheet. What am I doing wrong?
The commented line was an attempt based on "Read 16-bit PNG image file using Python", but it doesn't seem to work for me and generates a data type error.
import numpy as np
from PIL import Image
import cv2
image = cv2.imread('terzafoto.png', -cv2.IMREAD_ANYDEPTH)
cv2.imwrite('terza.jpg', image)
im = Image.open('terzafoto.png').convert('RGB')
im.save('terzafoto.jpg', format='JPEG', quality=100)
#im = Image.fromarray(np.array(Image.open('terzafoto.jpg')).astype("uint16")).convert('RGB')
Thanks to Dan Masek I was able to find the error in my code.
I was not correctly converting the data from 16 to 8 bit.
Here the updated code with the solution for OpenCV and Pillow.
import numpy as np
from PIL import Image
import cv2
im = Image.fromarray((np.array(Image.open('terzafoto.png'))//256).astype("uint8")).convert('RGB')
im.save('PIL100.jpg', format='JPEG', quality=100)
img = cv2.imread('terzafoto.png', cv2.IMREAD_ANYDEPTH)
cv2.imwrite('100.jpeg', np.uint8(img // 256),[int(cv2.IMWRITE_JPEG_QUALITY), 100])
The JPEG quality factor can be set according to your needs; 100 is the maximum quality, although JPEG is still lossy even at that setting.
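One caveat with the `//256` conversion above: it assumes the image actually uses the full 16-bit range. If the sensor only fills part of it, the 8-bit result looks dark and flat, and min-max stretching works better. A pure-Python sketch of that idea (the `to8bit` helper is illustrative; with numpy arrays you would typically use `cv2.normalize` instead):

```python
def to8bit(values16, lo=None, hi=None):
    # Scale 16-bit sample values into 0..255. By default the actual
    # min..max range of the data is stretched to the full 8-bit range,
    # which preserves contrast when only part of the 16-bit range is used.
    lo = min(values16) if lo is None else lo
    hi = max(values16) if hi is None else hi
    span = max(hi - lo, 1)  # avoid division by zero for flat images
    return [round((v - lo) * 255 / span) for v in values16]
```

For a full-range image this matches the `//256` result at the endpoints, but for narrow-range scientific data it keeps the detail visible.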

How to let FFMPEG fetch frames from OpenCV and stream them to HTTP server

There is a camera that shoots at 20 frames per second; each frame is 4000x3000 pixels.
The frames are sent to a piece of software containing OpenCV. OpenCV resizes the frames to 1920x1080, and they must then be sent to FFmpeg to be encoded to H264 or H265 using Nvidia NVENC.
The encoded video is then streamed over HTTP to a maximum of 10 devices.
The infrastructure is very good (10 Gb LAN) with state-of-the-art switches, routers, etc.
Right now, I can get 90 FPS when encoding the images from an NVMe SSD, which means the required encoding speed is achievable.
The question is how to get the images from OpenCV to FFMPEG ?
The stream will be watched in a web app built with the MERN stack (in case that is relevant).
For cv::Mat you have cv::VideoWriter. If you wish to use FFmpeg directly then, assuming the Mat is continuous, which can be enforced:
if (! mat.isContinuous())
{
mat = mat.clone();
}
you can simply feed mat.data into sws_scale
sws_scale(videoSampler, mat.data, stride, 0, mat.rows, videoFrame->data, videoFrame->linesize);
or directly into AVFrame
For cv::cuda::GpuMat, VideoWriter implementation is not available, but you can use NVIDIA Video Codec SDK and similarly feed cv::cuda::GpuMat::data into NvEncoderCuda, just make sure your GpuMat has 4 channels (BGRA):
NV_ENC_BUFFER_FORMAT eFormat = NV_ENC_BUFFER_FORMAT_ABGR;
std::unique_ptr<NvEncoderCuda> pEnc(new NvEncoderCuda(cuContext, nWidth, nHeight, eFormat));
...
cv::cuda::cvtColor(srcIn, srcIn, cv::ColorConversionCodes::COLOR_BGR2BGRA);
NvEncoderCuda::CopyToDeviceFrame(cuContext, srcIn.data, 0, (CUdeviceptr)encoderInputFrame->inputPtr,
(int)encoderInputFrame->pitch,
pEnc->GetEncodeWidth(),
pEnc->GetEncodeHeight(),
CU_MEMORYTYPE_HOST,
encoderInputFrame->bufferFormat,
encoderInputFrame->chromaOffsets,
encoderInputFrame->numChromaPlanes);
Here's my complete sample of using GpuMat with NVIDIA Video Codec SDK
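If you would rather avoid the FFmpeg C API entirely, another common pattern is to pipe raw BGR frames from OpenCV into an ffmpeg subprocess over stdin. A hedged Python sketch of that approach (the output URL, the `mpegts` muxer, and the availability of `h264_nvenc` are assumptions that depend on your ffmpeg build and streaming setup):

```python
import subprocess

def ffmpeg_pipe_cmd(width, height, fps, url):
    # Build an ffmpeg command line that reads raw BGR frames from stdin
    # and encodes them with NVENC. Adapt the muxer/URL to your server.
    return [
        "ffmpeg",
        "-f", "rawvideo",            # uncompressed frames, no container
        "-pix_fmt", "bgr24",         # OpenCV's default channel order
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "-",                   # read frames from stdin
        "-c:v", "h264_nvenc",        # NVIDIA hardware encoder
        "-f", "mpegts",
        url,
    ]

if __name__ == "__main__":
    # Only run where OpenCV and ffmpeg are installed.
    import cv2
    cap = cv2.VideoCapture(0)
    proc = subprocess.Popen(
        ffmpeg_pipe_cmd(1920, 1080, 20, "http://localhost:8080/feed"),
        stdin=subprocess.PIPE)
    ok, frame = cap.read()
    while ok:
        frame = cv2.resize(frame, (1920, 1080))
        proc.stdin.write(frame.tobytes())  # raw bytes straight to ffmpeg
        ok, frame = cap.read()
    proc.stdin.close()
    proc.wait()
```

The pipe adds one memory copy per frame compared to the sws_scale route, so measure whether it still meets the 20 FPS budget at 1920x1080.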

mp4 codec in Raspberry Pi 4: not writing frames to video

I'm not able to write an mp4 video file with cv2 on Rpi4.
All I'm getting in feedback is VIDIOC_DQBUF: Invalid argument
fps = 30.0  # fps must be defined before creating the writer
writer = cv2.VideoWriter('test.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps, (640, 480), True)
stream = cv2.VideoCapture(0)
ret, frame = stream.read()
while ret:
    writer.write(frame)
    cv2.imshow('Video', frame)
    ret, frame = stream.read()
    if cv2.waitKey(1) & 0xFF == 27:
        break
stream.release()
writer.release()
cv2.destroyAllWindows()
The video is displayed correctly by cv2.imshow, and the output file is created, but no frames are actually written to it, so the video file appears corrupted.
I am assuming this is a codec error. I've tried listing the available codecs by passing fourcc=-1 to VideoWriter(), but the other fourcc codes I tried didn't work either. Has anyone had success writing videos with OpenCV on the Rpi4?
I've tested your code and it worked well on my Raspberry Pi 4. I'm using the latest Raspberry Pi OS and OpenCV 4.3.0. I can also write to an AVI container:
out = cv2.VideoWriter('output.avi', cv2.VideoWriter_fourcc(*'XVID'), 30.0, (640,480))
If you cannot use either of them, try updating the OS and OpenCV on your Rpi4.
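When debugging codec problems like this, it can help to know that a FourCC is just four ASCII characters packed into a 32-bit integer, which is all `cv2.VideoWriter_fourcc` computes. A small pure-Python sketch for encoding and decoding them (these helpers are my own, not cv2 APIs; decoding is handy for inspecting `cap.get(cv2.CAP_PROP_FOURCC)`):

```python
def fourcc(code: str) -> int:
    # Pack a 4-character codec tag into the little-endian 32-bit integer
    # that cv2.VideoWriter_fourcc(*code) would return.
    assert len(code) == 4
    v = 0
    for i, ch in enumerate(code):
        v |= ord(ch) << (8 * i)
    return v

def fourcc_to_str(v: int) -> str:
    # Reverse operation: unpack an integer fourcc back into its tag.
    return "".join(chr((int(v) >> (8 * i)) & 0xFF) for i in range(4))
```

Also worth checking after constructing the writer: `writer.isOpened()` returns False when the backend rejected the fourcc/container combination, which matches the "file created but empty" symptom.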

How to save opencv Mat matrix to a file that can be loaded in Matlab

I want to check the details of some Mat matrices in my OpenCV code (within Qt). An easy way, to my knowledge, to check the data is to load it in Matlab, so I want to save these matrices into a file that Matlab can load. Does anyone have experience doing this? A concrete example would be greatly helpful!
OpenCV provides a straightforward example using imwrite, and Matlab can then open the JPG files with imread.
An OpenCV Mat can also be saved to a CSV file using cv::format() with cv::Formatter::FMT_CSV, which can then be read in Matlab using csvread.
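In Python the same round trip is simple, since a cv2 image is a numpy array and `np.savetxt(path, mat, delimiter=',')` produces a file that csvread/readmatrix can load. A stdlib-only sketch of the CSV route (the `save_matrix_csv` helper is illustrative):

```python
import csv

def save_matrix_csv(mat, path):
    # Write a 2-D matrix (a list of rows) to a plain CSV file.
    # Matlab loads the result with csvread(path) or readmatrix(path).
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(mat)
```

CSV keeps the values human-readable and exact for integers, which makes it a better debugging format than a lossy JPG.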

How to detect artifacts in video?

I'm using OpenCV to handle videos in mp4 format. The image below is a random frame extracted from a video, and you can see the obvious distortion on the sweater.
How can we detect such artifacts? Or can we avoid them by extracting nearby keyframes, and if so, how?
As @VC.One suggested, these distortions are due to video interlacing. Here is a good article about interlacing/deinterlacing: What is Deinterlacing? Facts, solutions, examples.
There are several tools to handle deinterlacing:
[Windows] The one suggested on 100fps.com: VirtualDub + DivX codec + AviSynth
[Windows] MediaCoder, suggested by @VC.One.
[Windows/Linux] FFmpeg provides several deinterlacing filters, e.g. yadif, kerndeint, etc. Here is an example: ffmpeg -i input.mp4 -vf yadif output.mp4
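If you want to flag interlaced frames programmatically, one simple heuristic is that combing makes adjacent rows differ far more than rows that are two apart. An illustrative pure-Python sketch of that metric (not a production detector; on real footage you would compute it on grayscale frames read with cv2, and ffmpeg's idet filter does a more robust version of the same analysis):

```python
def comb_score(frame):
    # frame: 2-D list of grayscale values. Combing artifacts interleave two
    # fields, so the sum of differences between adjacent rows is much larger
    # than between rows two apart; a ratio well above 1 hints at interlacing.
    adj = sum(abs(a - b)
              for r1, r2 in zip(frame, frame[1:])
              for a, b in zip(r1, r2))
    alt = sum(abs(a - b)
              for r1, r2 in zip(frame, frame[2:])
              for a, b in zip(r1, r2))
    return adj / max(alt, 1)  # guard against division by zero
```

Frames whose score stays near 1 are likely progressive; spikes correlate with the combed frames you would want to deinterlace or skip.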
