OpenCV output on V4L2 - opencv

I would like to know whether I can use OpenCV to write to a V4L2 device.
I would take a picture, apply small changes using OpenCV's features, and then send it to a V4L2 device.
I searched the web, and there are a lot of examples of how to read from a V4L2 device, but I found nothing about writing to one.
Can someone help me?

The question is 8 months old, but if you still need an answer (I assume your OS is Linux):
1. Install the v4l2loopback module
1.1. Load and configure it, e.g. in modprobe.conf: options v4l2loopback video_nr=22,23
2. Use C++/OpenCV code such as this gist:
2.1. Set up the device using an ioctl() call
2.2. Write raw RGB data to the device (e.g. /dev/video23)
2.3. Use it as a regular V4L2 device (e.g. as a webcam, or vlc v4l2:///dev/video23)
More: you can also feed the v4l2 loopback device with ffmpeg:
ffmpeg -f x11grab -r 12 -s 1920x1080 -i :0.0+0,0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 -vf 'scale=800:600' /dev/video22

Related

How to create a gif from an image sequence without dither with FFMPEG?

I'm able to create a gif from the image sequence, but I'm struggling to remove the dither from it.
This is the command I'm using to create the gif:
ffmpeg -f image2 -framerate 24 -y -i image_%d.png -loop -1 "C:\Users\agetr\Documents\_tmp\__giuf.gif"
And I've tried to use the paletteuse=dither=none filter in different ways with no luck.
P.S.: I'm very new to the ffmpeg CLI.
You need to use -sws_dither none (after the -i $file argument and before the output file). I've tried this on the current git/master of FFmpeg and it works as expected, but on older builds (e.g. 4.4.1) it doesn't; I don't know exactly why, so use a recent version/build (5.0, or any build from 2022, if possible).

Extracting frames from a video created by muxing a bunch of images together is changing the integrity of the frames

I am trying to transfer a bunch of images as a video. What I am doing is simply muxing these static images into a video using the following command.
ffmpeg -framerate 1 -i %02d.jpg -codec copy 1.mkv
After this, I verify the integrity of my static images and the frames in the video using -
ffmpeg -i %02d.jpg -f framehash -
and
ffmpeg -i 1.mkv -map 0:v -f framehash -
I get the same hashes so it means I have archived the images properly. Now next I send this video to my friend who extracts the frames using -
ffmpeg -i 1.mkv mkv%02d.jpg
After extraction the hashes are no longer the same, which means the integrity is lost.
How can I extract the frames as-is, so that the integrity is not lost?
Also, if there is any other way to achieve what I am trying to do, please advise.
Here are the hashes.
The command
ffmpeg -i 1.mkv mkv%02d.jpg
doesn't "extract" the JPEGs. It decodes the stored frames and re-encodes them using the MJPEG encoder.
Use
ffmpeg -i 1.mkv -c copy mkv%02d.jpg

Http Live Streaming without encoding

I have an MPEG-TS H.264 video stream of a live TV channel, and I want to live-stream it to the iPhone. HLS requires splitting the stream into segments (e.g. 10 s each) and serving them via an M3U8 playlist; for this I am currently using ffmpeg and the m3u8-segmenter available on the internet. But I do not want to transcode with ffmpeg, as I have memory and processor limitations on my hardware. Is it possible to only segment the MPEG-TS video and send it directly to the iPhone?
I have tried many ways but have been unable to do so. I am using a Linux-based system.
Please help me: what are the procedures for live streaming to the iPhone without transcoding the video?
Thanks
The best way to go about this is to cut out FFmpeg entirely. Although you can coerce FFmpeg into not transcoding (with -c copy), since the video is already in MPEG-TS format straight off the livestream, it's best to use it directly.
Since it looks like the video is coming over HTTP, you can use curl to print it to stdout:
curl http://localhost:6954/myvideo.ts | ./m3u8-segmenter -i - -d 10 -p outputdir/prefix -m outputdir/output.m3u8 -u http://domain.com
Or, if you want to use wget instead of curl, it's similar:
wget -O - http://localhost:6954/myvideo.ts | ./m3u8-segmenter -i - -d 10 -p outputdir/prefix -m outputdir/output.m3u8 -u http://domain.com
Either wget or curl will likely already be installed on your system.

ffmpeg open remote video with high latency while gstreamer not

I use MJPEG-Streamer to send my remote camera's video over Wi-Fi. I use the following commands to view the video:
gstreamer:
gst-launch -v souphttpsrc location= "http://192.168.1.1:8080/?action=stream&type=.mjpg" do-timestamp=true is_live=true ! multipartdemux ! jpegdec ! autovideosink
ffmpeg:
ffplay "http://192.168.1.1:8080/?action=stream&type=.mjpg"
or:
ffplay "http://192.168.1.1:8080/?action=stream&type=.mjpg" -fflags nobuffer
However, ffplay has high latency, up to 3-10 seconds in my tests, while GStreamer shows almost no latency.
When using MJPEG-Streamer on localhost, both methods show low latency. So what is the reason, and how can I decrease the latency?
More detail:
I want to use OpenCV to capture a remote camera. My OpenCV build has ffmpeg support but no GStreamer support (I tried but failed; CMake could not find my GStreamer, and I don't know which GStreamer library to install on openSUSE 13.1). I can get the video in OpenCV, but with high latency, so I compared ffmpeg with GStreamer; the result is as above. So how can I decrease the latency? I read this link, but still found no solution.
Thank you.

VLC recording rtsp stream

I have a problem recording an RTSP stream with VLC player. My method works on macOS, but not on Windows. Command line:
vlc -vvv rtsp://admin:admin#192.168.0.151/live/h264/ --sout="#transcode{vcodec=mp4v,vfilter=canvas{width=800,height=600}}:std{access=file,mux=mp4,dst=C:\123.mp4}"
On macOS it works fine, but under Windows it creates an unreadable file. MediaInfo output:
General
Complete name : C:\123.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom
File size : 1.08 MiB
Any suggestions?
It seems your destination path is not escaped correctly. Try this:
vlc -vvv rtsp://admin:admin#192.168.0.151/live/h264/ --sout="#transcode{vcodec=mp4v,vfilter=canvas{width=800,height=600}}:std{access=file,mux=mp4,dst=C:\\123.mp4}"
For Linux users, ffmpeg alone works straight away.
If you want to watch the stream while recording, write to a .mkv instead of an .mp4.
This example overwrites the file video.mp4 in your home folder without asking (due to the -y parameter), and the original codecs are kept.
ffmpeg -i rtsp://192.168.42.1/live -vcodec copy -acodec copy -y ~/video.mp4
NB: this example URL is for an Ambarella Xiaomi Mijia 4K camera. Like many Wi-Fi IP cameras, you have to activate the stream via telnet first; for this particular model, this is the command to send before reading the stream via rtsp://:
echo '{"msg_id":257,"token":0}' | telnet 192.168.42.1 7878
