The problem is simple:
When I run, for example:
print(Process.runSync('ipconfig', []).stdout)
I get the expected output:
Windows IP Configuration
Unknown adapter Helyi kapcsolat:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
......
But when I try to run ffmpeg, I get no output:
print(Process.runSync('ffmpeg', []).stdout)
What I'd expect in this case is to get the help output from ffmpeg.
My guess is that ffmpeg has some kind of "different" way of interacting with stdout than most executables do. What could be the problem? I'm totally lost here.
I need to read the stdout eventually to get the output of ffmpeg -list_devices true -f dshow -i dummy, which would list the available DirectShow devices.
FFmpeg outputs useful data to stderr, including the help output and the device list of my original command, ffmpeg -list_devices true -f dshow -i dummy.
Simply use stderr (as well as stdout) to get the output:
print(Process.runSync('ffmpeg', []).stderr)
Tested on Python 3.7 and later:
import subprocess
s_out = subprocess.run(['ffmpeg', 'command_here'], check=True, capture_output=True).stderr
print(s_out)
"subprocess.CalledProcessError" must be returned if command is invalid.
Related
Environment: Docker, Ubuntu 20.04, OpenCV 3.5.4, FFmpeg 4.2.4
I'm currently reading the output of a cv2.VideoCapture session using the CV_FFMPEG backend and successfully writing it back out in real time to a file using cv2.VideoWriter. The reason I am doing this is to draw bounding boxes on the input and save it to a new output.
The problem is that I am doing this in a headless environment (a Docker container), and I'd like to view what's being written to cv2.VideoWriter in real time.
I know there are ways to pass my display through, using XQuartz for example, so I could use cv2.imshow. But what I really want to do is write those frames to an RTSP Server, so not only my host can "watch" but other hosts can watch too.
After the video is released I can easily stream the video to my RTSP Server using this command.
ffmpeg -re -stream_loop -1 -i output.mp4 -c copy -f rtsp rtsp://rtsp_server_host:8554/stream
Is there any way to pipe the frames to the above command as they come in? Can cv2.VideoWriter itself write frames to an RTSP Server?
Any ideas would be much appreciated! Thank you.
After much searching I finally figured out how to do this with FFmpeg in a subprocess. Hopefully this helps someone else!
import subprocess

import cv2
import numpy as np

def open_ffmpeg_stream_process():
    # Read raw RGB frames from stdin (pipe:0) and publish them as an RTSP stream.
    args = (
        "ffmpeg -re -stream_loop -1 -f rawvideo -pix_fmt "
        "rgb24 -s 1920x1080 -i pipe:0 -pix_fmt yuv420p "
        "-f rtsp rtsp://rtsp_server:8554/stream"
    ).split()
    return subprocess.Popen(args, stdin=subprocess.PIPE)

def capture_loop():
    ffmpeg_process = open_ffmpeg_stream_process()
    capture = cv2.VideoCapture(<video/stream>)
    while True:
        grabbed, frame = capture.read()
        if not grabbed:
            break
        # Feed each frame to ffmpeg's stdin as raw bytes.
        ffmpeg_process.stdin.write(frame.astype(np.uint8).tobytes())
    capture.release()
    ffmpeg_process.stdin.close()
    ffmpeg_process.wait()
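One hedged caveat about the sketch above: capture.read() hands back frames in OpenCV's default BGR channel order, while the pipe is declared -pix_fmt rgb24, so depending on your source you may need a conversion before writing, and -s 1920x1080 has to match the actual frame size:

frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # only if your colors come out swapped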
I have installed ffmpeg and mjpeg-streamer. The latter reads a .jpg file from /tmp/stream and outputs it via http onto a website, so I can stream whatever is in that folder through a web browser.
I wrote a bash script that continuously captures a frame from the webcam and puts it in /tmp/stream:
while true
do
ffmpeg -f video4linux2 -i /dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0 -vframes 1 /tmp/stream/pic.jpg
done
This works great, but is very slow (~1 fps). In the hopes of speeding it up, I want to use a single ffmpeg command which continuously updates the .jpg at, let's say 10 fps. What I tried was the following:
ffmpeg -f video4linux2 -r 10 -i /dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0 /tmp/stream/pic.jpg
However this - understandably - results in the error message:
[image2 @ 0x1f6c0c0] Could not get frame filename number 2 from pattern '/tmp/stream/pic.jpg'
av_interleaved_write_frame(): Input/output error
...because the output pattern is bad for a continuous stream of images.
Is it possible to stream to just one jpg with ffmpeg?
Thanks...
You can use the -update option:
ffmpeg -y -f v4l2 -i /dev/video0 -update 1 -r 1 output.jpg
From the image2 file muxer documentation:
-update number
If number is nonzero, the filename will always be interpreted as just a
filename, not a pattern, and this file will be continuously overwritten
with new images.
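If you'd rather launch this from code than from a shell, here is a minimal Python sketch; the device path and the 10 fps rate come from the question, while the wrapper itself is illustrative:

import subprocess

# Continuously overwrite /tmp/stream/pic.jpg at ~10 fps; -update 1 tells the
# image2 muxer to treat the name as a single file, not a pattern.
proc = subprocess.Popen(
    ["ffmpeg", "-y", "-f", "v4l2", "-r", "10",
     "-i", "/dev/v4l/by-id/usb-Microsoft_Microsoft_LifeCam_VX-5000-video-index0",
     "-update", "1", "/tmp/stream/pic.jpg"],
    stderr=subprocess.DEVNULL)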
It is possible to achieve what I wanted by using:
./mjpg_streamer -i "input_uvc.so -r 1280x1024 -d /dev/video0 -y" -o "output_http.so -p 8080 -w ./www"
...from within the mjpg_streamer's directory. It will do all the nasty work for you by displaying the stream in the browser when using the address:
http://{IP-OF-THE-SERVER}:8080/
It's also light-weight enough to run on a Raspberry Pi.
Here is a good tutorial for setting it up.
Thanks for the help!
How can I get the output file of this FFmpeg call saved to a variable?
def take_screenshot
  logger.debug "Trying to grab a screenshot from #{self.file}"
  system "ffmpeg -i #{self.file} -ss 00:00:02 -vframes 1 #{Rails.root}/public/uploads/tmp/screenshots/#{File.basename(self.file)}.jpg"
  self.save!
end
I have tried:
self.screenshot = system "ffmpeg -i #{self.file} -ss 00:00:02 -vframes 1 #{Rails.root}/public/uploads/tmp/screenshots/#{File.basename(self.file)}.jpg"
but this doesn't save anything.
Thanks in advance!
ffmpeg usually outputs nothing on stdout and all of its debug messages on stderr. You can make it output the video (or image) to stdout when you pass - as the output file. You'd then also need to suppress stderr.
system "ffmpeg -i #{self.file} -ss 00:00:02 -c:v mjpeg -f mjpeg -vframes 1 - 2>/dev/null"
This will output the raw data of the JPEG-encoded image to stdout. From there you can save the data to a variable and, for example, transfer it somewhere else.
To capture stdout from system calls, see: Getting output of system() calls in Ruby – Open3.popen3 in particular should help in this case, since it lets you discard stderr from within Ruby.
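For comparison, the same stdout-capture idea as a Python sketch (the input path is a placeholder):

import subprocess

# '-' sends the JPEG bytes to stdout; stderr carries ffmpeg's log and is discarded.
result = subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-ss", "00:00:02",
     "-c:v", "mjpeg", "-f", "mjpeg", "-vframes", "1", "-"],
    stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, check=True)
jpeg_bytes = result.stdout  # raw JPEG data, ready to assign or upload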
I'm running an ffmpeg command to try to get the duration of a video file. The command is as follows...
system('ffmpeg -i C:\Users\example\Desktop\video9.mp4 -f ffmetadata')
When I run that line it outputs a lot of info to the Rails console, including the duration. But how would I capture that info so I can split it and grab the data I need? (I'm doing this inside a Rails controller.)
When I run something like this...
metadata = system('ffmpeg -i C:\Users\example\Desktop\video9.mp4 -f ffmetadata')
puts metadata
All it returns is false.
Use:
output = `ffmpeg -i C:\\Users\\example\\Desktop\\video9.mp4 -f ffmetadata`
The problem is that system doesn't capture the output of the command being run. Instead, we use %x[...] or its equivalent using backticks, which captures the sub-shell's STDOUT.
If you need more control, look at Open3.capture3.
Found it...
inspect_command = "ffmpeg -i " + file_location + " 2>&1 "
metadata = `#{inspect_command}`
If all you need to get is the video duration use ffprobe instead of ffmpeg. It returns the video metadata directly.
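A sketch of that ffprobe route in Python; the -show_entries flags are standard ffprobe options, and the file name is a placeholder:

import subprocess

# Ask ffprobe for just the container duration, with no section wrappers or keys.
out = subprocess.run(
    ["ffprobe", "-v", "error", "-show_entries", "format=duration",
     "-of", "default=noprint_wrappers=1:nokey=1", "video9.mp4"],
    capture_output=True, text=True, check=True).stdout
duration_seconds = float(out.strip())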
I am trying to run this command here:
f = open("|ffmpeg -i /Users/joaoh82/Desktop/teste.MP4")
result = f.read()
But I am not getting any response...
But when I try this command in the terminal it works great:
ffmpeg -i /Users/joaoh82/Desktop/teste.MP4
But it doesn't work from the Rails code. The funny thing is that when I try the same thing with something else, like echo $PATH, it works great:
f = open("|echo $PATH")
result = f.read()
Any ideas!?
ffmpeg -i prints to stderr, which won't be captured by your pipe. You could redirect stderr to stdout:
result = `ffmpeg -i /Users/joaoh82/Desktop/teste.MP4 2>&1`
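If it helps to see the same merge outside Ruby, here is a Python sketch; note that ffmpeg exits nonzero when no output file is given, which is why check=True is omitted:

import subprocess

# Merge stderr into stdout so the metadata ffmpeg prints is captured.
result = subprocess.run(
    ["ffmpeg", "-i", "/Users/joaoh82/Desktop/teste.MP4"],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
print(result.stdout)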
You probably need to specify the full path to ffmpeg. It might be working in your shell because ffmpeg is in your PATH.
Basically in your shell, type which ffmpeg. Use that full path in your open() call.
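The same lookup, sketched in Python for reference:

import shutil

# Full path to the binary if it is on PATH, else None.
print(shutil.which("ffmpeg"))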