I'm trying to connect to my laptop camera for live streaming, but it doesn't work: cv2.imshow() raises an error. I'm using Python 3.6 and OpenCV 4.1.0 on Windows 10.
I've tried rebuilding the library with GTK+ 2.x support, but nothing changed.
import cv2

cap = cv2.VideoCapture(0)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Here's the error:
Traceback (most recent call last):
File "C:/Users/hiba/PycharmProjects/Python for CV/12-Connecting_to_Camera.py", line 21, in <module>
cv2.imshow('frame', gray)
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\highgui\src\window.cpp:627: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'
[ WARN:0] terminating async callback
Process finished with exit code 1
The prebuilt opencv-contrib-python wheel ships with GUI (highgui) support on Windows, so reinstalling it resolves the error:

pip install opencv-contrib-python
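After reinstalling, a quick sanity check (a minimal sketch, not part of the original post) confirms that imshow works:

import numpy as np
import cv2

# If highgui support is present, this opens a window instead of raising
# "The function is not implemented".
img = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.imshow('sanity check', img)
cv2.waitKey(1000)  # keep the window up for one second
cv2.destroyAllWindows()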
I'm trying to run this elephas tutorial on Colab.
I prepared the environment with
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q https://downloads.apache.org/spark/spark-2.4.6/spark-2.4.6-bin-hadoop2.7.tgz
!tar xf spark-2.4.6-bin-hadoop2.7.tgz
!pip install -q findspark
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.4.6-bin-hadoop2.7"
import findspark
findspark.init("spark-2.4.6-bin-hadoop2.7")
!pip install elephas
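For completeness, a SparkSession is then typically created before building the DataFrame df used below (a minimal sketch; the app name is arbitrary):

from pyspark.sql import SparkSession

# Local-mode session; the tutorial's DataFrame `df` is built from this.
spark = SparkSession.builder.master("local[*]").appName("elephas-tutorial").getOrCreate()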
When I fit the model
pipeline = Pipeline(stages=[estimator])
fitted_pipeline = pipeline.fit(df)
I get the following error message
>>> Fit model
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-11-6d2ae7604dd2> in <module>()
1 # Fitting a model returns a Transformer
2 pipeline = Pipeline(stages=[estimator])
----> 3 fitted_pipeline = pipeline.fit(df)
11 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizers.py in get(identifier)
901 else:
902 raise ValueError(
--> 903 'Could not interpret optimizer identifier: {}'.format(identifier))
ValueError: Could not interpret optimizer identifier: 1e-06
As you can see, the error relates to the decay parameter (decay=1e-6). However, I still get the same error even when I change this value:
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
sgd_conf = optimizers.serialize(sgd)
Any ideas?
This may have been due to an incompatibility with the TensorFlow 2.0 API. I would recommend retrying with the latest release, https://github.com/danielenricocahall/elephas/releases/tag/1.0.0, which now contains support for TensorFlow 2.1.x and 2.3.x.
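For reference, under a TF2-era setup the optimizer would be built with the tf.keras API, where learning_rate replaces the older lr argument (a minimal sketch, assuming the upgraded release):

from tensorflow.keras import optimizers

# TF2-style construction; 'learning_rate' replaces the deprecated 'lr',
# and schedule-based decay is the TF2 replacement for the 'decay' kwarg.
sgd = optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
sgd_conf = optimizers.serialize(sgd)  # a dict config, not a bare float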
I upgraded OpenCV from 4.2.x to 4.4.0 with this pip command:
pip install --upgrade opencv-python==4.4.0.40
and checked that the upgrade completed:
Collecting opencv-python==4.4.0.40
|████████████████████████████████| 33.5 MB 283 kB/s
Requirement already satisfied, skipping upgrade: numpy>=1.17.3 in c:\users\kangs\anaconda3\lib\site-packages (from opencv-python==4.4.0.40) (1.18.5)
Found existing installation: opencv-python 4.3.0.36
Successfully uninstalled opencv-python-4.3.0.36
PS C:\Users\kangs\Documents\TELPA\Source\numberplateRecognition> python
Python 3.8.3 (default, Jul 2 2020, 17:30:36) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'4.4.0'
>>> exit()
However, I still get this error message when I run this code:
cv2.dnn.readNet(WEIGHTS, CFG)
Error message:
Traceback (most recent call last):
File "c:\Users\kangs\Documents\TELPA\Source\numberplateRecognition\number_plate_v4c.py", line 54, in <module>
File "c:\Users\kangs\Documents\TELPA\Source\numberplateRecognition\algorithm\detection\yolodetector.py", line 39, in __init__
self.load_dir(dir_path)
File "c:\Users\kangs\Documents\TELPA\Source\numberplateRecognition\algorithm\detection\yolodetector.py", line 50, in load_dir
self.load_files(file_list)
File "c:\Users\kangs\Documents\TELPA\Source\numberplateRecognition\algorithm\detection\yolodetector.py", line 78, in load_files
self.net = cv2.dnn.readNet(WEIGHTS, CFG)
cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\dnn\src\darknet\darknet_io.cpp:686: error: (-212:Parsing error) Unsupported activation: mish in function 'cv::dnn::darknet::ReadDarknetFromCfgStream'
I found out that the code is still linked to the previous OpenCV 4.2.0 library.
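A quick way to see which cv2 module a script actually imports (a minimal diagnostic sketch, not from the original post):

import cv2

# The version and file path reveal whether an older install is shadowing
# the upgraded one (e.g. a second copy in another environment).
print(cv2.__version__)
print(cv2.__file__)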
Can someone please let me know why this happens and how to solve this problem?
I am new to OpenCV and Google Colab. I have been working on a project that requires me to take real-time image frames from the webcam and process them. The problem is that with the code below, 'frame' is always None and my webcam does not seem to switch on, although the example code from Colab for capturing images works fine:
How to use cap = cv2.VideoCapture(0) in Google Colab
Here is the code that fails:
import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
---> 19 frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
error: OpenCV(3.4.3) /io/opencv/modules/imgproc/src/color.cpp:181: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
Try replacing the first lines with

frame = cv2.imread('your_image.png', 0)

If it works, then there is a high chance the problem is your camera.
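Another quick check (a minimal sketch, with hypothetical messages) is to verify that the device opens and returns frames before converting:

import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    print("camera could not be opened")
else:
    ret, frame = cap.read()
    if not ret or frame is None:
        print("camera opened but returned no frame")
    else:
        print("got a frame of shape", frame.shape)
cap.release()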
There could be multiple reasons. Try

sudo apt-get install ffmpeg
sudo apt-get install cheese
cheese

to see if you can get a video feed in Ubuntu. If you can, then it is an OpenCV config issue; if you cannot, then it is either a driver or a hardware issue.
If it is a driver issue, follow https://help.ubuntu.com/community/Webcam to set up the driver.
If the hardware broke down, there is not much you can do in software.
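On the OpenCV side, a quick probe of the first few device indices (a minimal sketch) can show whether any capture device opens at all:

import cv2

# Probe indices 0-3; a built-in laptop webcam is usually index 0.
for i in range(4):
    cap = cv2.VideoCapture(i)
    print(f"index {i}:", "opened" if cap.isOpened() else "failed")
    cap.release()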
I'm working on a project that I found online (YOLO Object Detection with OpenCV, one of the PyImageSearch projects). So I downloaded the whole code, saved it in the Downloads folder as recommended, and ran the command-line script:
python /home/ubuntu/Downloads/yolo-object-detection/yolo_video.py \
> --input /home/ubuntu/Downloads/yolo-object-detection/videos/WS-1sec.mp4 \
> --output /home/ubuntu/Downloads/yolo-object-detection/output/WS-1sec.avi \
> --yolo /home/ubuntu/Downloads/yolo-object-detection/yolo-coco
but the output was:
[INFO] loading YOLO from disk...
OpenCV(3.4.1-dev) Error: Parsing error (Unknown layer type: shortcut) in ReadDarknetFromCfgFile, file /home/ubuntu/src/opencv/modules/dnn/src/darknet/darknet_io.cpp, line 503
Traceback (most recent call last):
File "/home/ubuntu/Downloads/yolo-object-detection/yolo_video.py", line 42, in <module>
net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)
cv2.error: OpenCV(3.4.1-dev) /home/ubuntu/src/opencv/modules/dnn/src/darknet/darknet_io.cpp:503: error: (-212) Unknown layer type: shortcut in function ReadDarknetFromCfgFile
I'm running the same exact version of OpenCV 3.4.1 on another machine and it worked there! This time I'm working on the Jetson TX2 and it didn't run!
The link to the original project is here.
Any idea why this error occurs, please?
I think you might have the wrong OpenCV version. Check this answer:
OpenCV unknown layer type running darknet detect
"Support for running YOLOv3 has been added to OpenCV master branch (3.4.3)."
I flashed my Jetson TX1 with the latest Jetpack (Linux For Tegra R23.2), and the following command works perfectly:
gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e
I tried to use the following Python program to receive images from the webcam:
source: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
I got the following error:
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /hdd/buildbot/slave_jetson_tx_2/35-O4T-L4T-Jetson-L/opencv/modules/imgproc/src/color.cpp, line 3739
Traceback (most recent call last):
File "webcam.py", line 11, in <module>
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.error: /hdd/buildbot/slave_jetson_tx_2/35-O4T-L4T-Jetson-L/opencv/modules/imgproc/src/color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function cvtColor
I know the problem is that it cannot receive images from the webcam. I also changed the code to just show the received image, but it gives an error indicating that no image was received from the camera.
I also tried to use C++ with the following code:
#include "opencv2/opencv.hpp"

using namespace cv;

int main(int argc, char** argv)
{
    VideoCapture cap;
    // open the default camera, use something different from 0 otherwise;
    // Check VideoCapture documentation.
    if (!cap.open(0))
        return 0;
    for (;;)
    {
        Mat frame;
        cap >> frame;
        if (frame.empty()) break; // end of video stream
        imshow("this is you, smile! :)", frame);
        if (waitKey(1) == 27) break; // stop capturing by pressing ESC
    }
    // the camera will be closed automatically upon exit
    // cap.close();
    return 0;
}
and it compiled without any errors using
g++ webcam.cpp -o webcam `pkg-config --cflags --libs opencv`
But again, when I run the program I receive this error:
$ ./webcam
Unable to stop the stream.: Device or resource busy
Unable to stop the stream.: Bad file descriptor
VIDIOC_STREAMON: Bad file descriptor
Unable to stop the stream.: Bad file descriptor
What have I missed? Is there a command I should run to activate the webcam before running this program?
According to the NVIDIA forums, you need to get the GStreamer pipeline correct, and at the moment OpenCV cannot autodetect the stream from the nvcamera.
The only way I got it to work was with OpenCV 3 and this line of code to grab the video:
cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
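Built on the pipeline string above, a minimal read-and-display loop would look like this (a sketch, assuming your OpenCV build has GStreamer support):

import cv2

pipeline = ("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, "
            "format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! "
            "video/x-raw, format=(string)I420 ! videoconvert ! "
            "video/x-raw, format=(string)BGR ! appsink")

cap = cv2.VideoCapture(pipeline)  # needs GStreamer support in OpenCV
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('tx1', frame)
    if cv2.waitKey(1) == 27:  # ESC to quit
        break

cap.release()
cv2.destroyAllWindows()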
Thanks for all the information in this thread; I finally got my Python code to grab a frame from the TX1 camera module.
The important thing to get it working is to install OpenCV 3.1.0: you can follow the official build method and replace the Python cv2.so lib with the 3.1.0 version. The original L4T OpenCV is 2.4.
And the other important thing is to use the correct nvcamerasrc; try

cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1 ! nvtee ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM), format=(string)I420 ! nvoverlaysink -e ! appsink")
It works on my TX1; just sharing it here.