Use GStreamer to write JPEG-encoded data as video using appsrc - OpenCV

I have a Python script that receives JPEG-encoded data from the following pipeline and sends that data out on a port.
rtspsrc location=rtsp://192.168.100.40:8554/ latency=0 retry=50 ! rtph265depay ! h265parse ! avdec_h265 ! videoscale ! videorate ! video/x-raw, framerate=10/1, width=1920, height=1080 ! jpegenc quality=85 ! image/jpeg ! appsink
At the receiver end I want to save the incoming data as a video, as described at this link:
https://gstreamer.freedesktop.org/documentation/jpeg/jpegenc.html
gst-launch-1.0 videotestsrc num-buffers=50 ! video/x-raw, framerate='(fraction)'5/1 ! jpegenc ! avimux ! filesink location=mjpeg.avi
I have tried using OpenCV's VideoWriter with CAP_GSTREAMER:
pipeline = f'appsrc ! avimux ! filesink location=recording.avi'
cap_write = cv2.VideoWriter(pipeline,cv2.CAP_GSTREAMER,0, 1, (1920,1080), False)
...
cap_write.write(jpgdata)
but it gives a runtime warning
[ WARN:0] global ../opencv/modules/videoio/src/cap_gstreamer.cpp (1948) writeFrame OpenCV | GStreamer warning: Error pushing buffer to GStreamer pipeline
If I modify the pipeline and use
pipeline = f'appsrc ! videoconvert ! videorate ! video/x-raw, framerate=1/1 ! filesink location=recording.avi'
The code does save the incoming frames, but the resulting file is very large and has no bitrate or duration information:
ffmpeg -i recording.avi
...
[mjpeg @ 0x560f4a408600] Format mjpeg detected only with low score of 25, misdetection possible!
Input #0, mjpeg, from 'recording.avi':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: mjpeg (Baseline), yuvj420p(pc, bt470bg/unknown/unknown), 1920x1080 [SAR 1:1 DAR 16:9], 25 tbr, 1200k tbn, 25 tbc
At least one output file must be specified
Need help!
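A possible approach, sketched below: push the already-encoded JPEG buffers into GStreamer directly through the PyGObject bindings instead of VideoWriter, whose GStreamer backend pushes raw frames of the size given to the constructor (most likely why push-buffer fails on encoded bytes). This is an untested sketch; the push() helper and the 10 fps timestamps are illustrative assumptions:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# appsrc is told the caps up front so avimux can negotiate; the
# width/height/framerate values mirror the sender pipeline.
pipeline = Gst.parse_launch(
    'appsrc name=src is-live=true format=time '
    'caps="image/jpeg,width=1920,height=1080,framerate=10/1" '
    '! avimux ! filesink location=recording.avi')
src = pipeline.get_by_name('src')
pipeline.set_state(Gst.State.PLAYING)

def push(jpgdata, frame_index):
    # Wrap the encoded bytes in a GstBuffer and timestamp it at 10 fps.
    buf = Gst.Buffer.new_wrapped(jpgdata)
    buf.pts = frame_index * Gst.SECOND // 10
    buf.duration = Gst.SECOND // 10
    src.emit('push-buffer', buf)

# ... call push() for each received JPEG, then finalize the AVI:
# src.emit('end-of-stream')
# pipeline.set_state(Gst.State.NULL)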

Related

Low FPS on NVIDIA Jetson Nano with OpenCV

I use some Python code to decode H.264 video on an NVIDIA Jetson Nano, and everything works fine. I sometimes see this warning:
[ WARN:0] global /home/ubuntu/opencv/modules/videoio/src/cap_gstreamer.cpp (961) open
OpenCV | GStreamer warning: Cannot query video position: status=1, value=253, duration=-1
I am not really sure that this message is the problem, but when I get this warning the performance is around 7-7.5 FPS; when I don't get it, performance increases to 10.5 FPS. I would be happy to understand the problem. I replay some pcap video using Colasoft Packet Player and capture it on the Jetson with this Python script:
import cv2

class Video:
    def __init__(self, urlName='udp://127.0.0.1:46002'):
        print(f"Initialize Video, url: {urlName}")
        self.pipeline = ('udpsrc port=46002 multicast-group=234.0.0.0 ! '
                         'h264parse ! nvv4l2decoder ! nvvidconv ! '
                         'video/x-raw,format=BGRx ! videoconvert ! '
                         'video/x-raw,format=BGR ! appsink drop=1')
        # self.cap = cv2.VideoCapture(urlName, cv2.CAP_FFMPEG)
        self.cap = cv2.VideoCapture(self.pipeline, cv2.CAP_GSTREAMER)
        if not self.cap.isOpened():
            print('VideoCapture not opened')
            exit(-1)

    def readVideo(self):
        """
        Read the H.264 stream and compute image info.
        :return: image, data_image
        """
        ret, frame = self.cap.read()
        while frame is None:  # "is None", not "== None": comparing an ndarray with == is elementwise
            ret, frame = self.cap.read()
        shape = frame.shape
        data = {"image_width": shape[1], "image_height": shape[0],
                "channels": shape[2],
                "size_in_bytes": shape[0] * shape[1] * shape[2],
                "bits_per_pixel": shape[2] * 8}  # video frame metadata
        return frame, data
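To quantify the reported difference between 7-7.5 and 10.5 FPS, one can time a fixed number of reads; an illustrative sketch (the count of 100 is arbitrary):

import time

video = Video()
n = 100
t0 = time.time()
for _ in range(n):
    video.readVideo()
print(f"average FPS: {n / (time.time() - t0):.1f}")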

GStreamer: Extract Audio from an m3u8 Video Stream

I am trying to extract the audio from an HLS m3u8 live video stream into either FLAC, PCM, or OGG-OPUS. I've tried several options. They all create the file on my desktop, but when I play the file, no audio has been recorded.
Convert to FLAC
gst-launch-1.0 souphttpsrc location=[m3u8 URL] ! hlsdemux ! decodebin ! audioconvert ! flacenc ! filesink location=audio.flac
Convert to PCM
gst-launch-1.0 souphttpsrc location=[m3u8 URL] ! hlsdemux ! decodebin ! audioconvert ! audio/x-raw,format=S32BE,channels=1,rate=48000 ! filesink location=audio.pcm
Convert to OGG-OPUS
gst-launch-1.0 souphttpsrc location=[m3u8 URL] ! hlsdemux ! decodebin ! audioconvert ! opusenc ! filesink location=audio.ogg
Information about the m3u8 HLS stream from gst-discoverer-1.0
Properties:
Duration: 99:99:99.999999999
Seekable: no
Live: no
container: application/x-hls
container: MPEG-2 Transport Stream
audio: MPEG-4 AAC
Stream ID: 722f60699ac437d8b42b2325b9497eb8707874802bf34a0185ca68ebfd95dd38/src_0:1/00000101
Language: <unknown>
Channels: 2 (front-left, front-right)
Sample rate: 48000
Depth: 32
Bitrate: 0
Max bitrate: 0
video: H.264 (Main Profile)
Stream ID: 722f60699ac437d8b42b2325b9497eb8707874802bf34a0185ca68ebfd95dd38/src_0:1/00000100
Width: 768
Height: 432
Depth: 24
Frame rate: 30000/1001
Pixel aspect ratio: 1/1
Interlaced: false
Bitrate: 0
Max bitrate: 0
I figured this out and posted the solution on GitHub:
GStreamer Example on GitHub
This Node.js example takes a live m3u8 stream, uses GStreamer to extract the audio, saves it to a FLAC audio file, and sends it to AWS Transcribe, all in real time. (Some code was copied from other examples online and combined into one working example.)
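For comparison, here is a rough Python analogue of what that Node.js example does: run GStreamer as a child process, have it decode the HLS audio to raw PCM on stdout, and consume the bytes in real time. This is an untested sketch; the URL is a placeholder and the S16LE/mono/16 kHz caps are illustrative assumptions:

import subprocess

# -q keeps gst-launch's progress messages off stdout, so the pipe
# carries only the raw PCM produced by fdsink fd=1.
pipeline = (
    "gst-launch-1.0 -q souphttpsrc location=https://example.com/stream.m3u8 "
    "! hlsdemux ! decodebin ! audioconvert ! audioresample "
    "! audio/x-raw,format=S16LE,channels=1,rate=16000 ! fdsink fd=1"
)
proc = subprocess.Popen(pipeline, shell=True, stdout=subprocess.PIPE)
while True:
    chunk = proc.stdout.read(4096)  # 4 KiB of raw PCM per iteration
    if not chunk:
        break
    # forward `chunk` to AWS Transcribe, a file, etc.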

Can I store GStreamer output to a buffer using OpenCV if I add appsink to the pipeline?

Hello, I am very new to GStreamer. As of now I am encoding frames using a GStreamer pipeline in OpenCV in C++, but I do not want to dump each frame. I want to encode all the frames, store them in a large buffer, and dump that buffer whenever I want. How can I use appsink with GStreamer in OpenCV?
Below is my code snippet, where I encode each frame and also dump it:
cv::VideoWriter out("appsrc ! videoconvert ! video/x-raw,width=1280,height=720 ! v4l2h264enc ! avimux ! filesink location=hellotest.avi",cv::CAP_GSTREAMER,0,30,cv::Size(1280,720),true);
out.write(Frame);
But I want appsink to store all the encoded data in a buffer; I do not want to dump or write it.
The short answer is no.
First of all, you cannot write and read at the same time with VideoWriter. What you can do is create two pipelines that communicate with each other using the ipcpipeline elements.
For example
cv::VideoWriter out("appsrc ! videoconvert ! video/x-raw,width=1280,height=720 ! v4l2h264enc ! ipcpipelinesink",...);
cv::VideoCapture cap("ipcpipelinesrc ! ... ! appsink")
But this won't give you many alternatives, because the data formats supported by VideoCapture are very limited, as you can see in the source code:
// we support 11 types of data:
// video/x-raw, format=BGR -> 8bit, 3 channels
// video/x-raw, format=GRAY8 -> 8bit, 1 channel
// video/x-raw, format=UYVY -> 8bit, 2 channel
// video/x-raw, format=YUY2 -> 8bit, 2 channel
// video/x-raw, format=YVYU -> 8bit, 2 channel
// video/x-raw, format=NV12 -> 8bit, 1 channel (height is 1.5x larger than true height)
// video/x-raw, format=NV21 -> 8bit, 1 channel (height is 1.5x larger than true height)
// video/x-raw, format=YV12 -> 8bit, 1 channel (height is 1.5x larger than true height)
// video/x-raw, format=I420 -> 8bit, 1 channel (height is 1.5x larger than true height)
// video/x-bayer -> 8bit, 1 channel
// image/jpeg -> 8bit, mjpeg: buffer_size x 1 x 1
// bayer data is never decoded, the user is responsible for that
// everything is 8 bit, so we just test the caps for bit depth
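If OpenCV itself is not a hard requirement, the encoded bytes can be kept in memory by driving the pipeline with the GStreamer API and pulling buffers from an appsink. Below is an untested sketch (in Python for brevity, although the question uses C++); v4l2src stands in for whatever feeds the encoder, and the bytearray accumulation is an illustrative choice:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipe = Gst.parse_launch(
    'v4l2src ! videoconvert ! video/x-raw,width=1280,height=720 '
    '! v4l2h264enc ! h264parse ! appsink name=sink emit-signals=true')
sink = pipe.get_by_name('sink')
encoded = bytearray()  # grows with every encoded H.264 buffer

def on_sample(appsink):
    sample = appsink.emit('pull-sample')
    buf = sample.get_buffer()
    ok, info = buf.map(Gst.MapFlags.READ)
    if ok:
        encoded.extend(info.data)  # copy the encoded bytes out
        buf.unmap(info)
    return Gst.FlowReturn.OK

sink.connect('new-sample', on_sample)
pipe.set_state(Gst.State.PLAYING)
# ... later, dump `encoded` to disk in one go whenever you want.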

Read encoded frames from a webcam with Python OpenCV

At this moment I have the VideoImageTrack class shown below (adapted from here), which returns an av VideoFrame. The script works fine. My problem is that the frame encoding step:
VideoFrame.from_ndarray(image, format="bgr24")
is quite slow.
Is there a GStreamer pipeline that outputs already-encoded frames, iterable with Python OpenCV's read()?
import cv2
from aiortc import VideoStreamTrack
from av import VideoFrame

class VideoImageTrack(VideoStreamTrack):
    """
    A video stream track that returns a rotating image.
    """
    def __init__(self):
        super().__init__()  # don't forget this!
        self.video = cv2.VideoCapture(
            "v4l2src device=/dev/video0 ! "
            "video/x-raw,width=640,height=480,framerate=15/1,bitrate=250000 ! "
            "appsink")

    async def recv(self):
        pts, time_base = await self.next_timestamp()
        retval, image = self.video.read()
        frame = VideoFrame.from_ndarray(image, format="bgr24")
        frame.pts = pts
        frame.time_base = time_base
        return frame
I don't know whether processing will be faster, but you can try GStreamer's videoconvert, which converts video frames between many formats.
Example pipeline:
v4l2src device=/dev/video0 ! videoconvert ! video/x-raw,format=BGR,width=640,height=480,framerate=15/1 ! appsink
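For completeness, a sketch of how the suggested pipeline would be opened with OpenCV's GStreamer backend; this is untested, and drop=1 is an added assumption to avoid queueing stale frames:

import cv2

pipeline = ('v4l2src device=/dev/video0 ! videoconvert ! '
            'video/x-raw,format=BGR,width=640,height=480,framerate=15/1 ! '
            'appsink drop=1')
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ret, frame = cap.read()  # frame is already a BGR ndarray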

cvSaveImage not able to save image on changing -prefix to "a" in examples/detector.c

I want to save images from a running video for the command ./darknet detector demo cfg/coco.data cfg/yolo.cfg yolo.weights <video_file>, so I changed the value of the -prefix parameter to "a" in examples/detector.c to save images instead of streaming the video:
Loading weights from yolo.weights...Done!
video file: /home/ubuntu/VID_20170602_164011.3gp
save_image_jpg(): a_00000000.jpg
FPS:0.0
Objects:
save_image_jpg(): a_00000001.jpg
FPS:0.0
Objects:
refrigerator: 26%
person: 40%
save_image_jpg(): a_00000002.jpg
FPS:0.0
Objects:
refrigerator: 36%
person: 88%
refrigerator: 26%
save_image_jpg(): a_00000003.jpg
But the JPG images are not saved on my system. I am using Ubuntu as the OS. Help is appreciated!
Changing the prefix value from 0 to "a" actually works; some file permissions just had to be granted. You can also change the image directory path in the function save_image_jpg:
sprintf(buff, "your_img_path/%s.jpg", name);
Then run the following command to convert the images to a video:
ffmpeg -framerate 25 -i your_img_path/a_%08d.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p video_name.mp4
