I am using an NVIDIA Jetson Nano with a Raspberry Pi camera to run YOLOv3, and I'm sure the camera itself is compatible and working correctly.
When I try to run the live demo with this command:
./darknet detector demo data/yolo.data cfg/yolov3_custom_train.cfg yolov3_custom_train_3000.weights -c 0
CUDA-version: 10000 (10000), cuDNN: 7.6.3, GPU count: 1
OpenCV version: 4.3.0
Demo
net.optimized_memory = 0
I get the following warnings:
[ WARN:0] global /home/jn/opencv_build/opencv/modules/videoio/src/cap_gstreamer.cpp (1759) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/jn/opencv_build/opencv/modules/videoio/src/cap_gstreamer.cpp (888) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/jn/opencv_build/opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Then the demo window pops up and I get a constant green screen.
Is there any recommended solution?
(Screenshot: the green screen I get.)
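A quick way to see whether OpenCV's GStreamer backend can read the camera at all, outside of darknet, is a small standalone capture test. This is only a sketch: nvarguscamerasrc is the usual source element for a Raspberry Pi camera module on a Jetson, and the resolution/framerate caps are assumptions.

import cv2

# Minimal check that the CSI camera can be read through OpenCV's GStreamer backend,
# independently of darknet. The caps below may need adjusting for your sensor mode.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print("opened:", cap.isOpened())
ret, frame = cap.read()
print("frame read:", ret, None if frame is None else frame.shape)
cap.release()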
Related
I have been trying to run inference with the YOLOv4 Darknet model I trained, but whenever I run the command in PowerShell, all it does is print out the CUDA version and the OpenCV version. If anybody has experienced this or knows a solution, that would be amazing.
You have to specify the input file/video as the last command-line argument to perform inference on that particular file or video.
For example:
input image: darknet.exe detector test cfg/coco.data yolov4.cfg yolov4.weights -ext_output dog.jpg
video: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -ext_output test.mp4
WebCam 0: darknet.exe detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights -c 0
I am new to OpenCV and Google Colab. I have been working on a project that requires me to take real-time image frames from the webcam and process them. The problem is that with the code below, 'frame' is always None and my webcam does not seem to switch on. Using the example code from Colab to capture images works fine:
How to use cap = cv2.VideoCapture(0) in Google Colab
Here is the code that fails:
import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
---> 19 frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
error: OpenCV(3.4.3) /io/opencv/modules/imgproc/src/color.cpp:181: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
Try replacing the camera capture with a static image read:
frame = cv2.imread('your_image.png')  # read as a 3-channel BGR image so the cvtColor call can run
If that works, then the issue is most likely with your camera.
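A minimal sketch of that check (the device index 0 is an assumption), which reports an unopened camera or an empty frame instead of failing inside cvtColor:

import cv2

cap = cv2.VideoCapture(0)  # assumed device index
if not cap.isOpened():
    print("camera could not be opened")
else:
    ret, frame = cap.read()
    if not ret or frame is None:
        print("camera opened but returned no frame")
    else:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        print("got frame:", frame.shape)
cap.release()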
There could be multiple reasons. Try
sudo apt-get install ffmpeg
sudo apt-get install cheese
cheese
to see if you can get a video feed in Ubuntu. If you can, then it's an OpenCV configuration issue; if you cannot, then it's either a driver or a hardware issue.
If it's a driver issue, follow https://help.ubuntu.com/community/Webcam to set up the driver.
If the hardware has broken down, there is not much you can do in software.
I'm trying to use OpenCV via ArUco to read a UDP stream over the network on Ubuntu 14.04 LTS, using OpenCV 3.1.0 and GStreamer 1.2.4.
I changed the code of the "aruco_simple.cpp" example file to accomplish that, by passing a GStreamer pipeline to the VideoCapture constructor:
string PIPELINE_DEF = "udpsrc uri=udp://192.168.71.50:49152 do-timestamp=true name=src blocksize=1316 closefd=false buffer-size=100 !" \
"tsdemux !" \
"queue !" \
"avdec_h264 max-threads=0 !" \
"videoconvert !" \
"xvimagesink name=opencvsink"
//"appsink !"
;
aruco::CameraParameters CamParam;
// read the input image
cv::Mat InImage;
// Open input and read image
//VideoCapture vreader(argv[1]);
VideoCapture vreader(PIPELINE_DEF);
Executing this, I always get the following error:
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/osboxes/Aruco/opencv-3.1.0/modules/videoio/src/cap_gstreamer.cpp, line 834
Exception :/home/osboxes/Aruco/opencv-3.1.0/modules/videoio/src/cap_gstreamer.cpp:834: error: (-2) GStreamer: unable to start pipeline
in function cvCaptureFromCAM_GStreamer
I found this bug report (http://code.opencv.org/issues/3953), but the solution there does not help in my case.
If I start a GStreamer pipeline directly in Python (without ArUco and OpenCV), it works.
GStreamer was found by OpenCV according to the cmake output:
-- GStreamer:
-- base: YES (ver 1.2.4)
-- video: YES (ver 1.2.4)
-- app: YES (ver 1.2.4)
-- riff: YES (ver 1.2.4)
-- pbutils: YES (ver 1.2.4)
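One thing worth noting: OpenCV's GStreamer capture pulls frames through an appsink element, so a pipeline string intended for VideoCapture normally ends in appsink rather than a display sink such as xvimagesink. Below is only a sketch of such a receive pipeline; the address, port and avdec_h264 decoder come from the question above, while the BGR caps and the rest are assumptions.

import cv2

# Sketch of a receive pipeline that terminates in an appsink so OpenCV can pull frames.
pipeline = (
    "udpsrc uri=udp://192.168.71.50:49152 ! "
    "tsdemux ! queue ! avdec_h264 ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

# With a GStreamer-enabled OpenCV build, the pipeline string can be passed
# directly to VideoCapture.
cap = cv2.VideoCapture(pipeline)
print("opened:", cap.isOpened())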
My system is Ubuntu 15.10. I am quite sure my audio works:
arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: PCH [HDA Intel PCH], device 0: ALC887-VD Analog [ALC887-VD Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: PCH [HDA Intel PCH], device 2: ALC887-VD Alt Analog [ALC887-VD Alt Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
but pa_devs, which is an executable provided officially with PortAudio, reports 0 devices, as below:
PortAudio version number = 1899
PortAudio version text = 'PortAudio V19-devel (built Jan 30 2016 19:22:45)'
Number of devices = 0
And I can get the device count with PyAudio:
import pyaudio
pa = pyaudio.PyAudio()
print(pa.get_default_input_device_info())
print(pa.get_device_count())
--- output ---
{'defaultHighInputLatency': 0.034829931972789115, 'maxInputChannels': 32, 'defaultLowOutputLatency': 0.008707482993197279, 'defaultLowInputLatency': 0.008707482993197279, 'defaultSampleRate': 44100.0, 'hostApi': 0, 'structVersion': 2, 'maxOutputChannels': 32, 'defaultHighOutputLatency': 0.034829931972789115, 'name': 'default', 'index': 6}
7
Should I install something, or rebuild PortAudio with some special settings? Thanks!
I ran into this exact same problem. It was because PortAudio was built with support for OSS only. You need to build it with ALSA support. Note that even if you specify --with-alsa to the ./configure script, it still "succeeds" when it can't find ALSA; you have to manually check the configuration summary for a line like this:
ALSA ........................ no
(Don't you love autotools?)
Anyway do this:
sudo apt-get install libasound2-dev
./configure
And so on. I unfortunately couldn't find a way to get pa_devs to list what backends it supports, so you just have to guess this is the problem and try it. Worked for me anyway!
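After rebuilding, one way to confirm that the ALSA backend is actually compiled in is to list PortAudio's host APIs, for example through PyAudio. This is only a sketch, and it assumes the Python bindings are linked against the library you just rebuilt:

import pyaudio

pa = pyaudio.PyAudio()
# List the host APIs PortAudio was compiled with; ALSA should show up here
# if the rebuild found libasound2-dev.
for i in range(pa.get_host_api_count()):
    info = pa.get_host_api_info_by_index(i)
    print(info['name'], '- devices:', info['deviceCount'])
pa.terminate()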
I am using OpenCV 2.4.5 on Ubuntu 12.04 64-bit. I would like to be able to set the resolution of the input from my Logitech C310 webcam. The camera supports up to 1280x960 at 30fps, and I am able to view the video at this resolution in guvcview. But OpenCV always gets the video at only 640x480.
Trying to change the resolution with cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280) and cap.set(CV_CAP_PROP_FRAME_HEIGHT, 960) immediately after the VideoCapture cap is created has no effect; trying to set them immediately before getting every frame causes the program to crash immediately. I cannot reduce the resolution with this method either. I am also getting the error "HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP". I think this may be related, because it appears once when the VideoCapture is created, and once when I try to set the width and height (but, oddly, not if I try to set only one of them).
I know I'm not the first to have this problem, but I have yet to find a solution after much Googling and scouring of SO and elsewhere on the internet (among the many things I've already tried to no avail is the answer to this StackOverflow question: Increasing camera capture resolution in OpenCV). Is this a bug in OpenCV? If so, it's a rather glaring one.
Here's an example of code that exhibits the problem (just a modified version of OpenCV's video display code):
#include <cv.h>
#include <highgui.h>
using namespace cv;
int main(int argc, char** argv)
{
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) // check if we succeeded
return -1;
cap.set(CV_CAP_PROP_FRAME_WIDTH, 160);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 120);
Mat image;
namedWindow("Video", CV_WINDOW_AUTOSIZE);
while(1)
{
// cap.set(CV_CAP_PROP_FRAME_WIDTH, 160);
// cap.set(CV_CAP_PROP_FRAME_HEIGHT, 120);
cap >> image;
imshow("Video", image);
if(waitKey(10) == 99 ) break;
}
return 0;
}
As it is, that gets me two "HIGHGUI ERROR"s as described above and I get a 640x480 output. I know that 160x120 is a resolution that my camera supports from running v4l2-ctl --list-formats-ext. If I uncomment the two commented-out lines in the while loop, the program crashes immediately.
These might be related or have possible solutions: http://answers.opencv.org/question/11427/decreasing-capture-resolution-of-webcam/, http://answers.opencv.org/question/30062/error-setting-resolution-of-video-capture-device/
This is a bug in the v4l "version" (build) of OpenCV 2.4 (including 2.4.12), but the bug is not in the libv4l version. For OpenCV 3.1.0, neither the v4l nor the libv4l version has the bug.
(Your error message HIGHGUI ERROR: V4L/V4L2: VIDIOC_S_CROP indicates that you have the v4l version; the message is in cap_v4l.cpp, see code, but not in cap_libv4l.cpp.)
A workaround to get the v4l version of OpenCV 2.4 to work at a fixed resolution other than 640x480 is
changing the values for DEFAULT_V4L_WIDTH and DEFAULT_V4L_HEIGHT in
modules/highgui/src/cap_v4l.cpp and re-building OpenCV, kudos to this
answer.
If you want to build the libv4l version instead, all you likely need to do is
install libv4l-dev and rebuild OpenCV; WITH_LIBV4L was enabled by default for me. If it is not, your cmake command should contain
-D WITH_LIBV4L=ON
The cmake output (or version_string.tmp) for a libv4l build contains something like
Video I/O:
...
V4L/V4L2: Using libv4l1 (ver 0.8.6) / libv4l2 (ver 0.8.6)
(For a v4l build, it is just V4L/V4L2: NO/YES.)
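You can also check an already-installed build at runtime instead of digging up the cmake output: cv::getBuildInformation() (cv2.getBuildInformation() in Python) prints the same Video I/O section. A minimal sketch:

import cv2

# The "Video I/O" section of the build information shows whether
# V4L/V4L2 and libv4l were compiled in.
for line in cv2.getBuildInformation().splitlines():
    if "V4L" in line:
        print(line.strip())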
Just wanted to add my CMake options to build with Java on the Raspberry Pi 3, based on Ulrich's comprehensive answer, for OpenCV 3.2.0. Make a /build folder in the same folder as OpenCV's CMakeLists.txt and execute this command from the new /build folder:
sudo cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_OPENCL=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_SHARED_LIBS=OFF -D JAVA_INCLUDE_PATH=$JAVA_HOME/include -D JAVA_AWT_LIBRARY=$JAVA_HOME/jre/lib/arm/libawt.so -D JAVA_JVM_LIBRARY=$JAVA_HOME/jre/lib/arm/server/libjvm.so -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_TESTS=OFF -D WITH_MATLAB=OFF -D WITH_CUFFT=OFF -D WITH_CUDA=OFF -D WITH_CUBLAS=OFF -D WITH_GTK=OFF -D WITH_WEBP=OFF -D BUILD_opencv_apps=OFF -D BUILD_PACKAGE=OFF -D WITH_LIBV4L=ON ..
You can use v4l2-ctl to set the frame size of the captured video, like below:
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1
You can find more information at this link
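Whichever approach you use, it is worth reading the resolution back from the capture, since a silently ignored set() is exactly the symptom described above. A minimal sketch (device index 0 and the modern cv2 property names are assumptions):

import cv2

cap = cv2.VideoCapture(0)  # assumed device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 960)
# Read back what the driver actually negotiated.
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
ret, frame = cap.read()
print(None if frame is None else frame.shape)
cap.release()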
Maybe you can try this, but I am not sure if this is what you want:
#include <X11/Xlib.h>
#include <opencv2/highgui/highgui.hpp>  // needed for the cv::namedWindow / cvSetWindowProperty calls below
Display* disp = XOpenDisplay(NULL);
Screen* scrn = DefaultScreenOfDisplay(disp);
int height = scrn->height;
int width = scrn->width;
//Create window for the ip cam video
cv::namedWindow("Front", CV_WINDOW_NORMAL);
cvSetWindowProperty( "Front", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN );
//Position of the screen where the video is shows
cvMoveWindow("Front", 0, 0);
cvResizeWindow( "Front", width, height );
This way you get a full-screen window for any screen size.