How to receive images from Jetson TX1 embedded camera?

I flashed my Jetson TX1 with the latest Jetpack (Linux For Tegra R23.2), and the following command works perfectly:
gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e
I tried to use the following Python program to receive images from the webcam:
source: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
I got the following error:
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /hdd/buildbot/slave_jetson_tx_2/35-O4T-L4T-Jetson-L/opencv/modules/imgproc/src/color.cpp, line 3739
Traceback (most recent call last):
File "webcam.py", line 11, in <module>
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.error: /hdd/buildbot/slave_jetson_tx_2/35-O4T-L4T-Jetson-L/opencv/modules/imgproc/src/color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function cvtColor
I know the problem is that it cannot receive images from the webcam. I also changed the code to simply display the captured frame, but it gives an error indicating that no image was received from the camera.
I also tried to use C++ with the following code:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int argc, char** argv)
{
VideoCapture cap;
// open the default camera, use something different from 0 otherwise;
// Check VideoCapture documentation.
if(!cap.open(0))
return 0;
for(;;)
{
Mat frame;
cap >> frame;
if( frame.empty() ) break; // end of video stream
imshow("this is you, smile! :)", frame);
if( waitKey(1) == 27 ) break; // stop capturing by pressing ESC
}
// the camera will be closed automatically upon exit
// cap.close();
return 0;
}
and it compiled without any errors using
g++ webcam.cpp -o webcam `pkg-config --cflags --libs opencv`
But again, when I run the program I receive this error:
$ ./webcam
Unable to stop the stream.: Device or resource busy
Unable to stop the stream.: Bad file descriptor
VIDIOC_STREAMON: Bad file descriptor
Unable to stop the stream.: Bad file descriptor
What have I missed? Is there any command I should run to activate the webcam before running this program?

According to the NVIDIA forums, you need to get the GStreamer pipeline right, and at the moment OpenCV cannot autodetect the stream from the nvcamera.
The only way I got it to work was with OpenCV 3 and this line of code to grab the video:
cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
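For completeness, here is a hedged sketch of that capture line inside a full read/display loop (the loop, the guards, and the window name are illustrative additions, not from the original answer):

import cv2

# GStreamer pipeline from the answer above; requires an OpenCV build with GStreamer support
pipeline = ("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, "
            "format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! "
            "video/x-raw, format=(string)I420 ! videoconvert ! "
            "video/x-raw, format=(string)BGR ! appsink")

cap = cv2.VideoCapture(pipeline)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # no frame delivered; stop rather than pass None along
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()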

Thanks for all the information in this thread; anyway, I just got my Python code working to grab frames from the TX1 camera module.
The important thing to get it working is to install OpenCV 3.1.0; you can follow the official build method and replace the Python cv2.so library with the 3.1.0 version. The original L4T OpenCV is 2.4.
And the other important thing is to use a correct nvcamerasrc pipeline; try
cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1 ! nvtee ! nvvidconv flip-method=2 ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
It works on my TX1; just sharing it here.
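As a quick sanity check that the cv2.so swap described above actually took effect, something like this may help (assuming cv2.getBuildInformation() is available, which it is in 3.x):

import cv2
print(cv2.__version__)  # should report 3.1.0 after the swap, not 2.4.x
print('GStreamer' in cv2.getBuildInformation())  # True if this build can parse such pipelines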

Related

GStreamer pipeline + OpenCV VideoCapture.read() returns None

I'm trying to get GStreamer + OpenCV RTSP video capture using the following:
vcap = cv2.VideoCapture("""rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! queue ! rtph264depay
! h264parse ! avdec_h264 ! videoconvert ! appsink""", cv2.CAP_GSTREAMER)
while True:
    ret, frame = vcap.read()
    print(frame)
    cv2.imshow('VIDEO', frame)
    cv2.waitKey(1)
However, the frame read by vcap is None:
(<unknown>:79564): GLib-GObject-WARNING **: 00:27:54.660: invalid cast from 'GstQueue' to 'GstBin'
(<unknown>:79564): GStreamer-CRITICAL **: 00:27:54.660: gst_bin_iterate_elements: assertion 'GST_IS_BIN (bin)' failed
(<unknown>:79564): GStreamer-CRITICAL **: 00:27:54.660: gst_iterator_next: assertion 'it != NULL' failed
(<unknown>:79564): GStreamer-CRITICAL **: 00:27:54.660: gst_iterator_free: assertion 'it != NULL' failed
[ WARN:0#0.020] global /tmp/opencv-20220409-60041-xvxfur/opencv-4.5.5/modules/videoio/src/cap_gstreamer.cpp (1226) open OpenCV | GStreamer warning: cannot find appsink in manual pipeline
[ WARN:0#0.020] global /tmp/opencv-20220409-60041-xvxfur/opencv-4.5.5/modules/videoio/src/cap_gstreamer.cpp (862) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
None
Traceback (most recent call last):
File "/Volumes/Data/Projects/rtmp_test/src/test.py", line 21, in <module>
read(1)
File "/Volumes/Data/Projects/rtmp_test/src/test.py", line 18, in read
cv2.imshow('VIDEO', frame)
cv2.error: OpenCV(4.5.5) /tmp/opencv-20220409-60041-xvxfur/opencv-4.5.5/modules/highgui/src/window.cpp:1000: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'imshow'
The stream plays perfectly fine in VLC, and gst-launch-1.0 rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink gives regular output. Does anyone know what might be wrong?
UPDATE: I've noticed that this problem occurs only on OSX. It works fine on my Ubuntu machine.
You may try specifying caps with video format BGR (or GRAY8 for monochrome) before the appsink, as this is the default format OpenCV expects in most cases (and perhaps simplify the quoting), such as:
gst_pipeline='rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1'
vcap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
Also note that printing each frame to the terminal inside the loop may prevent the program from running at the expected framerate, depending on your use case and platform.
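Putting both suggestions together, a hedged sketch of the loop with isOpened/ret guards so imshow never receives None (the guards are my addition, not part of the original answer):

import cv2

gst_pipeline = ('rtspsrc location=rtsp://192.168.100.60:554/stream1 latency=0 ! queue ! '
                'rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! '
                'video/x-raw,format=BGR ! appsink drop=1')
vcap = cv2.VideoCapture(gst_pipeline, cv2.CAP_GSTREAMER)
if not vcap.isOpened():
    raise RuntimeError('failed to open the GStreamer pipeline')
while True:
    ret, frame = vcap.read()
    if not ret:  # skip empty reads instead of passing None to imshow
        continue
    cv2.imshow('VIDEO', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
vcap.release()
cv2.destroyAllWindows()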

cv2.imshow error "The function is not implemented"

I'm trying to connect to my laptop camera for live streaming, but it doesn't work: cv2.imshow() provokes an error. I'm using Python 3.6 and OpenCV 4.1.0 on Windows 10.
I've tried rebuilding the library with GTK+ 2.x, but nothing changed.
import cv2

cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Here's the error:
Traceback (most recent call last):
File "C:/Users/hiba/PycharmProjects/Python for CV/12-Connecting_to_Camera.py", line 21, in <module>
cv2.imshow('frame', gray)
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\highgui\src\window.cpp:627: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'
[ WARN:0] terminating async callback
Process finished with exit code 1
The prebuilt wheel ships with GUI support, so installing it fixes the error:
pip install opencv-contrib-python
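To confirm whether your current build has any GUI backend at all, one hedged check is to look at the GUI section of the build information:

import cv2
# A usable build lists Win32 UI, GTK, Qt, or Cocoa under the GUI section
for line in cv2.getBuildInformation().splitlines():
    if 'GUI' in line or 'GTK' in line or 'QT' in line:
        print(line)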

OpenCV(3.4.3) !_src.empty() in function 'cvtColor' error

I am new to OpenCV and Google Colab. I have been working on a project that requires me to take real-time image frames from the webcam and process them. But the problem is that in the code below, 'frame' is always None and my webcam does not seem to switch on. Using the example code from Colab to capture images works fine, though:
How to use cap = cv2.VideoCapture(0) in Google Colab
Here is the code that fails:
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
---> 19 frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
error: OpenCV(3.4.3) /io/opencv/modules/imgproc/src/color.cpp:181: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
Try replacing the first line with
frame = cv2.imread('your_image.png', 0)
If that works, then there is a high chance it is a camera issue.
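Whatever the cause turns out to be, a hedged sketch of guarding the capture so cvtColor never sees None (the camera index and color code are taken from the question; the guards are my addition):

import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError('camera could not be opened')
ret, frame = cap.read()
if not ret or frame is None:
    raise RuntimeError('no frame received from the camera')
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
cap.release()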
There could be multiple reasons. Try
sudo apt-get install ffmpeg
sudo apt-get install cheese
cheese
to see if you can get a video feed in Ubuntu. If you can, then it is an OpenCV configuration issue; if you cannot, then it is either a driver or a hardware issue.
If it is a driver issue, follow https://help.ubuntu.com/community/Webcam to install the driver.
If the hardware has broken down, there is not much you can do in software.

Gstreamer does not sink to named pipe

I'm getting different behavior when the sink of a gst-launch pipeline is a named pipe vs a normal file.
I have a gst-launch pipeline which displays video from a camera on an OMAP embedded (Linux) board and delivers the video as AVI via a tee.
gst-launch -v -e omx_camera device=0 do-timestamp=1 mode=0 name=cam cam.src ! "video/x-raw-yuv, format=(fourcc)NV12, width=240, height=320, framerate=30/1" ! tee name=t1 t1. ! queue ! ducatih264enc profile=100 level=50 rate-preset=low-delay bitrate=24000 ! h264parse ! queue ! avimux ! filesink location=/tmp/camerapipe t1. ! queue ! dri2videosink sync=false
If I make
filesink location=/some/real/file t1.
all is well
but I wish to read the output with a Java/OpenCV process, and when I do this I don't get anything in the Java process. The gst-launch process does announce that it has changed to PLAYING.
To simplify things, instead of the Java process I ran tail -f on the named pipe
and also saw no output, though in both cases the dri2videosink displays the video.
With either tail or the Java process, killing the reader also stops the gst-launch process, so it is obviously 'connected' in some sense.
Killing the gst-launch process while tail is running yields what looks like a few KB, maybe one frame of data, after gst-launch exits.
I've tried saving to a normal file and reading it with the Java process; that works, so I know it's not the data format.
I am trying to do the same thing, though I am using OpenCV in C and working in Ubuntu.
I did get the following to work:
I created a named pipe in /dev/ called video_stream using mkfifo. Make sure you have permissions to read/write to/from it, or just use sudo.
Play a test video into the named pipe:
sudo gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=20/1, width=640, height=480 ! ffenc_mpeg4 ! filesink location=/dev/video_stream
Play from web cam to a named pipe:
sudo gst-launch -e v4l2src device=/dev/video0 ! ffenc_mpeg4 ! filesink location=/dev/video_stream
I then used the face detection tutorial at
http://docs.opencv.org/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html#cascade-classifier
to test everything, but changed my input from webcam 1 to the named pipe.
capture = cvCaptureFromCAM( -1 );
Becomes
VideoCapture capture("/dev/video_stream");
This would work, but the problem with pipes and files is that closing the reader makes GStreamer stop working. The solution is to use racic's ftee program:
sudo gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=20/1, width=640, height=480 ! ffenc_mpeg4 ! fdsink fd=1 | ./ftee /dev/video_stream > /dev/null 2>&1
This will write ftee's stdin to the named pipe, with a copy to stdout (sent to /dev/null), but ftee ignores errors and the closing of the destination pipe, so reading from the pipe and then stopping does not affect GStreamer. Just try it, and then think about what I wrote. Not the opposite :)
Play from named pipe, anytime you want:
gst-launch filesrc location=/dev/video_stream ! autovideosink
Regarding your use with OpenCV:
VideoCapture capture("/dev/video_stream");
The video stream from /dev/video_stream should be MPEG-4, but I'm not sure whether OpenCV will sense the source properly. You might have to experiment with the backend (even the GStreamer backend is available when compiled into OpenCV).
See the API reference when creating the capture:
VideoCapture(const String &filename, int apiPreference)
and set apiPreference to the proper value. I'd try FFmpeg or GStreamer.
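For illustration, a hedged Python sketch of passing an explicit apiPreference (this assumes an OpenCV build recent enough to expose the two-argument constructor and both backends):

import cv2

# Try the FFmpeg backend first; fall back to GStreamer if it fails to open
cap = cv2.VideoCapture('/dev/video_stream', cv2.CAP_FFMPEG)
if not cap.isOpened():
    cap = cv2.VideoCapture('/dev/video_stream', cv2.CAP_GSTREAMER)
print(cap.isOpened())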
If you want to use GStreamer directly, try appsink as the sink; that is what OpenCV reads from. This might be something like
filesrc location=/dev/video_stream ! video/h264 ! appsink
The video/h264 caps are a blind guess, as I don't have the ffenc_mpeg4 encoder (it's from GStreamer 0.10), but you get the idea.
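Since OpenCV's appsink path expects raw frames, a hedged variant that decodes first might look like this (decodebin and the BGR caps are my assumptions, not from the answer above):

import cv2

# decodebin picks a suitable decoder for whatever filesrc delivers;
# BGR caps match what OpenCV expects by default
pipeline = ('filesrc location=/dev/video_stream ! decodebin ! videoconvert ! '
            'video/x-raw,format=BGR ! appsink')
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)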
Good luck.

OpenCV: can't set resolution of video capture

I am using OpenCV 2.4.5 on Ubuntu 12.04 64-bit. I would like to be able to set the resolution of the input from my Logitech C310 webcam. The camera supports up to 1280x960 at 30fps, and I am able to view the video at this resolution in guvcview. But OpenCV always gets the video at only 640x480.
Trying to change the resolution with cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280) and cap.set(CV_CAP_PROP_FRAME_HEIGHT, 960) immediately after the VideoCapture cap is created has no effect; trying to set them immediately before getting every frame causes the program to crash immediately. I cannot reduce the resolution with this method either. I am also getting the error "HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP". I think this may be related, because it appears once when the VideoCapture is created, and once when I try to set the width and height (but, oddly, not if I try to set only one of them).
I know I'm not the first to have this problem, but I have yet to find a solution after much Googling and scouring of SO and elsewhere on the internet (among the many things I've already tried to no avail is the answer to this StackOverflow question: Increasing camera capture resolution in OpenCV). Is this a bug in OpenCV? If so, it's a rather glaring one.
Here's an example of code that exhibits the problem (just a modified version of OpenCV's video display code):
#include <cv.h>
#include <highgui.h>

using namespace cv;

int main(int argc, char** argv)
{
    VideoCapture cap(0); // open the default camera
    if(!cap.isOpened()) // check if we succeeded
        return -1;
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 160);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 120);
    Mat image;
    namedWindow("Video", CV_WINDOW_AUTOSIZE);
    while(1)
    {
        // cap.set(CV_CAP_PROP_FRAME_WIDTH, 160);
        // cap.set(CV_CAP_PROP_FRAME_HEIGHT, 120);
        cap >> image;
        imshow("Video", image);
        if(waitKey(10) == 99) break;
    }
    return 0;
}
As it is, that gets me two "HIGHGUI ERROR"s as described above and I get a 640x480 output. I know that 160x120 is a resolution that my camera supports from running v4l2-ctl --list-formats-ext. If I uncomment the two commented-out lines in the while loop, the program crashes immediately.
These might be related or have possible solutions: http://answers.opencv.org/question/11427/decreasing-capture-resolution-of-webcam/, http://answers.opencv.org/question/30062/error-setting-resolution-of-video-capture-device/
This is a bug in the v4l "version" (build) of OpenCV 2.4 (including 2.4.12), but the bug is not in the libv4l version. For OpenCV 3.1.0, neither the v4l nor the libv4l version has the bug.
(Your error message HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP indicates that you have the v4l version; the message is in cap_v4l.cpp, see code, but not in cap_libv4l.cpp.)
A workaround to get the v4l version of OpenCV 2.4 to work at a fixed resolution other than 640x480 is changing the values for DEFAULT_V4L_WIDTH and DEFAULT_V4L_HEIGHT in modules/highgui/src/cap_v4l.cpp and re-building OpenCV, kudos to this answer.
If you want to build the libv4l version instead, all you likely need to do is install libv4l-dev and rebuild OpenCV; WITH_LIBV4L was enabled by default for me. If it is not, your cmake command should contain
-D WITH_LIBV4L=ON
The cmake output (or version_string.tmp) for a libv4l build contains something like
Video I/O:
...
V4L/V4L2: Using libv4l1 (ver 0.8.6) / libv4l2 (ver 0.8.6)
(For a v4l build, it is just V4L/V4L2: NO/YES.)
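Whichever build you end up with, a hedged way to verify that the driver actually accepted a requested resolution is to read the properties back (OpenCV 3.x constant names shown; 2.4 uses the CV_ prefix):

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 960)
# If the backend silently refused the request, these still report the old values
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()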
Just wanted to add my CMake options to build with Java on the Raspberry Pi 3, based on Ulrich's comprehensive answer, for OpenCV 3.2.0. Make a /build folder in the same folder as OpenCV's CMakeLists.txt and execute this script from the new /build folder:
sudo cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_OPENCL=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_SHARED_LIBS=OFF -D JAVA_INCLUDE_PATH=$JAVA_HOME/include -D JAVA_AWT_LIBRARY=$JAVA_HOME/jre/lib/arm/libawt.so -D JAVA_JVM_LIBRARY=$JAVA_HOME/jre/lib/arm/server/libjvm.so -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_TESTS=OFF -D WITH_MATLAB=OFF -D WITH_CUFFT=OFF -D WITH_CUDA=OFF -D WITH_CUBLAS=OFF -D WITH_GTK=OFF -D WITH_WEBP=OFF -D BUILD_opencv_apps=OFF -D BUILD_PACKAGE=OFF -D WITH_LIBV4L=ON ..
You can use v4l2-ctl to set the frame size of the captured video, as below.
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1
You can find more information at this link
Maybe you can try this, but I am not sure if this is what you want:
#include <X11/Xlib.h>

Display* disp = XOpenDisplay(NULL);
Screen* scrn = DefaultScreenOfDisplay(disp);
int height = scrn->height;
int width = scrn->width;

//Create window for the ip cam video
cv::namedWindow("Front", CV_WINDOW_NORMAL);
cvSetWindowProperty("Front", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);

//Position on the screen where the video is shown
cvMoveWindow("Front", 0, 0);
cvResizeWindow("Front", width, height);
This way you get full screen for any screen size.
