I purchased two webcams (Logitech C310 HD) to use with a Raspberry Pi 3 B+. Each camera runs fine individually, but when I tried to run both at the same time it didn't work. I learned that this may be due to insufficient power from the Raspberry Pi's USB ports, so I purchased a powered USB hub. When I attach both cameras to the Raspberry Pi through the powered USB hub, I get this error:
Unable to stop the stream: Invalid argument
OpenCV(3.4.1) Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /home/pi/opencv-3.4.1/modules/highgui/src/window.cpp, line 356
Traceback (most recent call last):
File "two cameras simu.py", line 7, in <module>
cv2.imshow('frame1',frame1)
cv2.error: OpenCV(3.4.1) /home/pi/opencv-3.4.1/modules/highgui/src/window.cpp:356: error: (-215) size.width>0 && size.height>0 in function imshow
The code I used is:
import cv2
import numpy as np

cam1 = cv2.VideoCapture(1)
cam2 = cv2.VideoCapture(2)

while True:
    _, frame1 = cam1.read()
    cv2.imshow('frame1', frame1)
    _, frame2 = cam2.read()
    cv2.imshow('frame2', frame2)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:  # Esc to quit
        break

cam1.release()
cam2.release()
cv2.destroyAllWindows()
When I run the same code on my laptop (in PyCharm) with the powered USB hub attached, it works fine.
Why is there an error when trying to run two cameras with the Raspberry Pi? How can I run two webcams from a Raspberry Pi?
Try adding at the top:
from imutils.video import VideoStream
then change the input source for your cameras accordingly, for example:
cam1 = VideoStream(src=0).start()
Hope this solves your problem.
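For reference, a fuller two-camera sketch of that approach (the indices src=0 and src=1 are assumptions; check which /dev/video* nodes your webcams get, e.g. with v4l2-ctl --list-devices):

# Minimal two-camera sketch using imutils' threaded VideoStream.
from imutils.video import VideoStream
import cv2

cam1 = VideoStream(src=0).start()  # first webcam (index is an assumption)
cam2 = VideoStream(src=1).start()  # second webcam

while True:
    frame1 = cam1.read()
    frame2 = cam2.read()
    # A stream can briefly return None while its capture thread warms up.
    if frame1 is not None:
        cv2.imshow('frame1', frame1)
    if frame2 is not None:
        cv2.imshow('frame2', frame2)
    if cv2.waitKey(5) & 0xFF == 27:  # Esc to quit
        break

cam1.stop()
cam2.stop()
cv2.destroyAllWindows()

The threaded capture also reduces the chance that slow, blocking reads on the Pi starve one of the cameras.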
I'm new to coding and to using a Raspberry Pi. I've searched through many tutorials online, found out how to install the OpenCV library on the Pi itself, and installed VS Code on both my laptop and the Pi. The issue is that the code I used on my laptop doesn't work the same on the Pi: I'm getting errors that don't show up in VS Code on my laptop.
The purpose is to display a live feed from the camera on the Raspberry Pi.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (0, 0), fx=0.5, fy=0.5)
    cv2.imshow("Frame", frame)
    ch = cv2.waitKey(1)
    if ch & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
(line 10) error: (-206:Bad flag (parameter or structure field)) Unrecognized or unsupported array type in function 'cvGetMat'
(line 9) error: (-215:Assertion failed) !ssize.empty() in function 'resize'
I think you have to use a different device id in cv2.VideoCapture(id), because the Raspberry Pi runs Linux and the camera is not always enumerated as device 0.
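A quick way to find a working id is to probe the first few indices (a minimal sketch; the range 0-4 is an arbitrary assumption):

import cv2

# Report which device indices actually open and deliver a frame.
for i in range(5):
    cap = cv2.VideoCapture(i)
    ok, _ = cap.read()
    print("index", i, "opened:", cap.isOpened(), "frame:", ok)
    cap.release()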
Our HPC node has 2 K80 GPUs. When I run the following code on the HPC node using Python, it detects 2 GPUs and displays "gpu device types:['TeslaK80', 'TeslaK80']".
But when I run the same code through DASK, it only detects 1 GPU and displays "gpu device types:['TeslaK80']".
The following is the code that detects the GPUs:
import tensorflow as tf

def init_gpu():
    print("\n\n\n ... tensorflow version = ", tf.__version__)
    from tensorflow.python.client import device_lib
    local_device_protos = device_lib.list_local_devices()
    print("local device protos:{0}".format(local_device_protos))
    _gpu_raw_info = [(x.name, x.physical_device_desc) for x in local_device_protos if x.device_type == 'GPU']
    print("gpu raw info:{0}".format(_gpu_raw_info))
    _gpu_names = [x[0] for x in _gpu_raw_info]
    _gpu_devices = [x[1] for x in _gpu_raw_info]
    _gpu_device_types = [x.split(':')[2].split(',')[0].replace(' ', '') for x in _gpu_devices]
    print("gpu device types:{0}".format(_gpu_device_types))
The following is the DASK LSF cluster code that launches the job on the cluster:
cluster = LSFCluster(queue=queue_name, project=hpc_project, walltime='80:00', cores=1,
                     processes=1, local_directory='dask-worker-space', memory='250GB',
                     job_extra=['-gpu "num=2"'], log_directory='scheduler_log',
                     dashboard_address=':8787')
cluster.scale(1)
client = Client(cluster.scheduler_address, timeout=60)

wbsd_results = []
r = dask.delayed(init_gpu)()
wbsd_results.append(r)
client.compute(wbsd_results, sync=True)
Please help. Thanks.
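One thing worth checking (a hedged diagnostic, not a confirmed fix): print CUDA_VISIBLE_DEVICES from inside a Dask task, since the LSF/Dask worker environment may expose fewer GPUs to the worker process than the node physically has:

import os
import dask

def check_gpu_env():
    # Shows which GPUs the worker process is actually allowed to see.
    print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES"))

client.compute(dask.delayed(check_gpu_env)(), sync=True)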
I am able to access the built-in laptop camera with the following code:
import cv2
#import numpy as np

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    key = cv2.waitKey(20)
    cv2.imshow("preview", frame)
    if key == 27:  # exit on ESC
        break

cap.release()
cv2.destroyAllWindows()
Then I connected a USB camera and passed 1 instead of 0 to access it, but it shows an error. With any value between -1 and -99 I can only access the built-in camera. How can I access the USB camera? I checked the working condition of the USB camera with Cheese, and it works fine.
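On Linux you can also try opening the camera by its device node instead of guessing indices (a sketch; /dev/video1 is an assumption, confirm with ls /dev/video*):

import cv2

# Open the device node directly through the V4L2 backend.
cap = cv2.VideoCapture("/dev/video1", cv2.CAP_V4L2)
print("opened:", cap.isOpened())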
I am trying to read an RTSP stream from my IP camera using OpenCV on Linux. The camera is a Floureon IPC 360 from China. I am trying to develop some facial-recognition code.
I am using the following code:
import numpy as np
import cv2

vcap = cv2.VideoCapture("rtsp://192.168.1.240:554/realmonitor?channel=0")
print(vcap)

while True:
    ret, frame = vcap.read()
    print(ret, frame)
    cv2.imshow('VIDEO', frame)
    #cv2.imwrite('messigray.png', frame)
    cv2.waitKey(1)
$ python w.py
<VideoCapture 0x7fc685598230>
(False, None)
Traceback (most recent call last):
File "w.py", line 9, in <module>
cv2.imshow('VIDEO', frame)
cv2.error: OpenCV(4.1.0) /io/opencv/modules/highgui/src/window.cpp:352: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'imshow'
cv2.imshow is failing because the frame is None (ret is False).
In a separate window I can run openRTSP:
./openRTSP -4 -P 10 -F cam_eight -t -d 8 rtsp://192.168.1.240:554/realmonitor?channel=0
which creates a nice mp4 file that I can play:
107625 Sep 12 19:08 cam_eight-00000-00010.mp4
openRTSP works with or without the -t (TCP) flag.
I have also tried supplying the admin:123456 credentials to the cv2.VideoCapture URL, which openRTSP doesn't appear to require.
Any ideas why cv2.VideoCapture is apparently failing? I have tried variants of the above code, but nothing seems to work.
I have enabled ONVIF on the camera.
According to other answers, it isn't possible to acquire ONVIF streams with OpenCV out of the box, since it defaults the stream to the TCP protocol, while ONVIF relies on UDP.
You should define the environment variable OPENCV_FFMPEG_CAPTURE_OPTIONS to override the default TCP setting, as can be seen in OpenCV's FFmpeg capture source code:
OPENCV_FFMPEG_CAPTURE_OPTIONS=whatever
If you want to properly configure the capture options, refer to the FFmpeg documentation, since FFmpeg is what OpenCV uses internally.
As stated in the linked answer, keys and values are separated with ; and pairs are separated with |.
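For example, forcing UDP transport would look like this (a sketch following the key/value syntax above; the variable is set before cv2 is imported so the FFmpeg backend picks it up):

import os
# rtsp_transport;udp asks FFmpeg to use UDP instead of the default TCP.
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;udp"

import cv2
vcap = cv2.VideoCapture("rtsp://192.168.1.240:554/realmonitor?channel=0",
                        cv2.CAP_FFMPEG)
print("opened:", vcap.isOpened())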
I want to get both the depth and video streams from the Kinect into my OpenCV code. I am working on Linux and have installed the libfreenect module for depth. However, only one device is listed in /dev/. Now, when I connect the Kinect to my PC and run
camorama -d /dev/video0
I get the depth map. Then, when I access the device using VideoCapture in OpenCV, I get the RGB video. Now, if I run the camorama command again, I get the RGB video this time. I can't figure out what's happening. I basically want both streams in my OpenCV code. Please help.
Run this Python script:
import freenect
import cv2
import numpy as np

def pretty_depth(depth):
    # Scale the Kinect's 11-bit depth values down to 8 bits for display.
    np.clip(depth, 0, 2**10 - 1, depth)
    depth >>= 2
    depth = depth.astype(np.uint8)
    return depth

while True:
    orig = freenect.sync_get_video()[0]               # RGB frame from the Kinect
    orig = cv2.cvtColor(orig, cv2.COLOR_RGB2BGR)      # imshow expects BGR
    dst = pretty_depth(freenect.sync_get_depth()[0])  # depth frame from the Kinect
    cv2.imshow('Disparity', dst)
    cv2.imshow('RGB', orig)
    if cv2.waitKey(1) & 0xFF == ord('b'):
        break

cv2.destroyAllWindows()