I know that this is an OpenCV error, and I know that Flask has nothing to do with OpenCV. However, please stick with me to the end.
I'm getting this really weird error ONLY when I'm streaming the CV frames:
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Unable to stop the stream: Device or resource busy
My code:
# my camera_detector class does some AI work on the camera frame; other than that, nothing special
camera = camera_detector(my_arguments)

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    while True:
        print('getting frame')
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/feed')
def video_feed():
    return Response(gen(camera), mimetype='multipart/x-mixed-replace; boundary=frame')
Now, here is why I said this only happens with Flask. If I just grab the camera frames like this:
while True:
    frame = camera.get_frame()
without ever using Flask, everything runs just fine o_0
If it makes any difference, I'm using Python 3.7 on the Pi 4. My camera class also does some AI work on the frame produced by OpenCV (drawing boxes and labels) before returning the frame to Flask:
def get_frame(self):
    ret, frame = self.camera.read()
    # processing: AI work, drawing boxes and labels
    ret, jpeg = cv2.imencode('.jpg', frame)
    return jpeg.tobytes()
[Edit] camera info if it helps:
{20-04-22 15:39}raspberrypi:~/detect pi% v4l2-ctl -d /dev/video0 --list-formats
ioctl: VIDIOC_ENUM_FMT
    Type: Video Capture
    [0]: 'YUYV' (YUYV 4:2:2)
    [1]: 'MJPG' (Motion-JPEG, compressed)
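A note on the pixel-format line of the error: since the camera only advertises YUYV and MJPG, a commonly suggested workaround is to request one of those formats explicitly when opening the device. A minimal sketch, assuming OpenCV 4.x, the V4L2 backend and camera index 0:
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
# Ask V4L2 for MJPG, one of the two formats listed by v4l2-ctl above
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
ret, frame = cap.read()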
[SOLVED]: Answer is below for those who are interested.
For those who had the same issue, I found out what was causing it. The problem was that I created a camera instance like this:
camera = camera_detector(my_arguments)
and then passed that into my route function:
@app.route('/feed')
def video_feed():
    return Response(gen(camera), mimetype='multipart/x-mixed-replace; boundary=frame')
Turns out OpenCV did not like that very much. I found this very odd, but it works fine after I changed it to:
@app.route('/feed')
def video_feed():
    return Response(gen(camera_detector(my_arguments)), mimetype='multipart/x-mixed-replace; boundary=frame')
If anybody has an explanation for this, that would be nice!
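One plausible explanation, though it is an assumption rather than something confirmed for this setup: when the Flask development server runs with its auto-reloader enabled, the module is imported once in the parent process and again in the reloader's child process, so a module-level camera object tries to open /dev/video0 twice, and the second open would fail with exactly a "Device or resource busy" error. Creating the camera inside the route sidesteps the double module-level open. If that is the cause, the original version should also work with the reloader disabled:
# Hypothetical check: keep the module-level camera, but stop the dev
# server from re-importing the module in a child process.
app.run(host='0.0.0.0', debug=True, use_reloader=False)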
It looks like you're borrowing code from https://blog.miguelgrinberg.com/post/video-streaming-with-flask
Comparing the two, your snippet has an extra \r\n at the end of each frame. Try removing that.
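That is, assuming the blog post's version is the intended behaviour, the yield would end with a single \r\n:
yield (b'--frame\r\n'
       b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')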
I am using the person-detection-action-recognition-0005 pre-trained model from OpenVINO to detect people and their actions.
https://docs.openvinotoolkit.org/latest/_models_intel_person_detection_action_recognition_0005_description_person_detection_action_recognition_0005.html
From the above link, I wrote a Python script to get detections. This is the script:
import cv2

def main():
    print(cv2.__file__)
    frame = cv2.imread('/home/naveen/Downloads/person.jpg')
    actionNet = cv2.dnn.readNet('person-detection-action-recognition-0005.bin',
                                'person-detection-action-recognition-0005.xml')
    actionBlob = cv2.dnn.blobFromImage(frame, size=(680, 400))
    actionNet.setInput(actionBlob)
    # detection output
    actionOut = actionNet.forward(['mbox_loc1/out/conv/flat',
                                   'mbox_main_conf/out/conv/flat/softmax/flat',
                                   'out/anchor1', 'out/anchor2',
                                   'out/anchor3', 'out/anchor4'])
    # this is the part where I don't know how to get the person bboxes
    # and the action labels for those persons from actionOut
    for detection in actionOut[2].reshape(-1, 3):
        print('sitting ' + str(detection[0]))
        print('standing ' + str(detection[1]))
        print('raising hand ' + str(detection[2]))
Now, I don't know how to get the bounding boxes and action labels from the output variable (actionOut). I am unable to find any documentation or blog post explaining this.
Does anyone have any idea or suggestion on how this can be done?
There is a demo called smart_classroom_demo: link
This demo uses the network you are trying to run.
The parsing of outputs is located here
The implementation is in C++, but it should help you understand how the outputs of the network are parsed.
Hope it will help.
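In the meantime, a quick way to see what you are dealing with is to print the shape of each output blob. This is only an inspection sketch, not the actual decoding (which, per the demo, involves matching the confidences against prior boxes and anchors):
out_names = ['mbox_loc1/out/conv/flat',
             'mbox_main_conf/out/conv/flat/softmax/flat',
             'out/anchor1', 'out/anchor2', 'out/anchor3', 'out/anchor4']
actionOut = actionNet.forward(out_names)
for name, blob in zip(out_names, actionOut):
    # each entry is a numpy array; the shapes tell you how many priors there are
    print(name, blob.shape)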
I am trying to use a Haar cascade classifier for object detection. I have copied code for the Haar cascade algorithm, but it's not working. It's giving this error:
unknown url type: '//drive.google.com/drive/folders/11XfAPOgFv7qJbdUdPpHKy8pt6aItGvyg'
even though the link itself works.
import urllib.request, urllib.error, urllib.parse
import cv2
import os

def store_raw_images():
    neg_images_link = '//drive.google.com/drive/folders/11XfAPOgFv7qJbdUdPpHKy8pt6aItGvyg'
    neg_image_urls = urllib.request.urlopen(neg_images_link).read().decode()
    pic_num = 1
    if not os.path.exists('neg'):
        os.makedirs('neg')
    for i in neg_image_urls.split('\n'):
        try:
            print(i)
            urllib.request.urlretrieve(i, "neg/" + str(pic_num) + ".jpg")
            img = cv2.imread("neg/" + str(pic_num) + ".jpg", cv2.IMREAD_GRAYSCALE)
            # should be larger than samples / pos pic (so we can place our image on it)
            resized_image = cv2.resize(img, (100, 100))
            cv2.imwrite("neg/" + str(pic_num) + ".jpg", resized_image)
            pic_num += 1
        except Exception as e:
            print(str(e))

store_raw_images()
I am expecting a set of negative images as output, to build a dataset for object detection.
I think the missing "https:" at the start of the URL is causing the specific error.
Furthermore, you cannot just load a Drive folder when it is not shared (you should use a sharing link), and even then it is not optimal: you would have to parse the HTML response, and it may not even work.
I strongly suggest using a normal HTTP server or the Google Drive Python API instead.
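For the plain-HTTP-server route, here is a minimal sketch of the pattern the original code was aiming for. The URL is hypothetical; it assumes https://example.com/neg_urls.txt serves one image URL per line:
import os
import urllib.request

import cv2

def store_raw_images(url_list='https://example.com/neg_urls.txt'):  # hypothetical URL
    # Fetch the plain-text list of image URLs (one per line)
    neg_image_urls = urllib.request.urlopen(url_list).read().decode()
    os.makedirs('neg', exist_ok=True)
    for pic_num, url in enumerate(filter(None, neg_image_urls.split('\n')), start=1):
        try:
            path = 'neg/' + str(pic_num) + '.jpg'
            urllib.request.urlretrieve(url, path)
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            cv2.imwrite(path, cv2.resize(img, (100, 100)))
        except Exception as e:
            print(e)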
I'm trying to record from a Logitech Brio at 60fps, preferably at 1080p.
It should work, because I can get it working in OBS, and many others have achieved these settings.
Here is the code I am using to try to capture at this rate:
// Do some grabbing
cv::VideoCapture video_capture;
video_capture.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
video_capture.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
video_capture.set(cv::CAP_PROP_FPS, 60);

{
    INFO_STREAM("Attempting to capture from device: " << device);
    video_capture = cv::VideoCapture(device);

    // Read a first frame, often empty from a camera
    cv::Mat captured_image;
    video_capture >> captured_image;
}

if (!video_capture.isOpened())
{
    FATAL_STREAM("Failed to open video source");
    return 1;
}
else INFO_STREAM("Device or file opened");

cv::Mat captured_image;
video_capture >> captured_image;
What should I be doing differently for the Brio?
I had the same problem: same camera, couldn't change the resolution or fps. After hours of working on this and digging through the internet, I found a solution: you need to use DSHOW, and you need to read from capture device 1 instead of 0. Code below for reference:
fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
cap = cv2.VideoCapture()
cap.open(cameraNumber + 1 + cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FOURCC, fourcc)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 60)
Sorry, I only did this in Python, but I hope the same solution works in C++. I assume you can do something along the lines of:
video_capture = cv::VideoCapture(device + 1 + cv::CAP_DSHOW);
With OpenCV 4.1.0, this achieved 4K video under Windows, with my Logitech BRIO. The important thing in the end seemed to be to use CAP_DSHOW and to set the resolution after initialising the camera, not before.
cv::VideoCapture capture;
capture = cv::VideoCapture(cv::CAP_DSHOW);
if (!capture.isOpened())
{
    cerr << "ERROR: Can't initialize camera capture" << endl;
    return 1;
}
capture.set(cv::CAP_PROP_FRAME_WIDTH, 3840);
capture.set(cv::CAP_PROP_FRAME_HEIGHT, 2160);
capture.set(cv::CAP_PROP_FPS, 30);
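Whichever backend you use, it is worth reading the properties back afterwards, since set() can silently fall back to a lower mode. A quick sanity check (in Python for brevity, assuming device 0):
import cv2

cap = cv2.VideoCapture(0 + cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3840)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 2160)
cap.set(cv2.CAP_PROP_FPS, 30)
# Print what the driver actually granted, not what was requested
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH),
      cap.get(cv2.CAP_PROP_FRAME_HEIGHT),
      cap.get(cv2.CAP_PROP_FPS))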
I think the problem has nothing to do with the camera. The code might not work because you are creating a separate scope for opening the video capture. Upon exiting that scope, the destructor of the video_capture instance will be called, and therefore the !isOpened() check will always return true. I can't understand why you use those braces. Instead, it should be:
INFO_STREAM("Attempting to capture from device: " << device);
auto video_capture = cv::VideoCapture(device);
if (!video_capture.isOpened())
{
FATAL_STREAM("Failed to open video source");
return 1;
}
cv::Mat captured_image;
video_capture.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
video_capture.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
video_capture.set(cv::CAP_PROP_FPS, 60);
INFO_STREAM("Device or file opened");
video_capture >> captured_image;
After some troubleshooting of my own, I found that @ffarhour's solution worked for me:
fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
cap = cv2.VideoCapture()
cap.open(cameraNumber + 1 + cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FOURCC, fourcc)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_FPS, 60)
But for anyone troubleshooting this in the future, I also want to add that you need a good USB cable and, preferably, direct access to a USB 3.0 port. I had the camera plugged into my dock at first, and only 1080p worked.
A good check is the Windows 10 built-in Camera app. In my experience, high resolutions will only show up as options there if your cable and USB port support them, so don't bother with OpenCV until this works first.
Additionally, for those just starting out: the code snippet above goes before your OpenCV while loop, and inside the while loop you add:
while True:
    ret, frame = cap.read()
    frame = cv2.cvtColor(frame, 1)
    cv2.imshow('original image', frame)
    cv2.waitKey(2)
Finally, I want to note that 30 fps worked best for me (better than 24 fps), and that the max available resolution from the camera is NOT 3840 x 2160 pixels but 4096 x 2160. How cool is that?
I also strongly advise downloading the Logitech driver for the Brio, called 'Logitech Camera Settings'. It lets you set the FOV, autofocus and other things you otherwise could never access.
I use Python 2.7, Windows 7 and OpenCV 2.4.6, and I try to run the following code:
https://github.com/kyatou/python-opencv_tutorial/blob/master/08_image_encode_decode.py
# import opencv library
import cv2
import sys
import numpy

argvs = sys.argv
if (len(argvs) != 2):
    print 'Usage: # python %s imagefilename' % argvs[0]
    quit()
imagefilename = argvs[1]

try:
    img = cv2.imread(imagefilename, 1)
except:
    print 'failed to load %s' % imagefilename
    quit()

# encode to jpeg format
# encode param image quality 0 to 100. default: 95
# if you want to shrink the data size, choose a low image quality.
encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]
result, encimg = cv2.imencode('.jpg', img, encode_param)
if False == result:
    print 'could not encode image!'
    quit()

# decode from jpeg format
decimg = cv2.imdecode(encimg, 1)

cv2.imshow('Source Image', img)
cv2.imshow('Decoded image', decimg)
cv2.waitKey(0)
cv2.destroyAllWindows()
I keep getting the following error:
encode_param=[int(cv2.IMWRITE_JPEG_QUALITY), 90]
AttributeError: 'module' object has no attribute 'IMWRITE_JPEG_QUALITY'
I have tried a lot of things: reinstalling OpenCV, converting the cv2 code to cv code, and searching different forums, but I keep getting this error. Am I missing something? Is there someone who can run this code without getting the error?
BTW: other OpenCV code (taking pictures from the webcam) runs without problems...
At the moment I save the image to a temporary JPG file. Using the imencode function, I want to create the JPEG in memory instead.
Thanks in advance and best regards.
The problem is not in your code; it should work. It is with your OpenCV Python package. I can't tell you why it raises that error, but you can avoid it by replacing the encode_param declaration with this line:
encode_param=[1, 90]
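This works because 1 is the integer value of the JPEG-quality flag. If you would rather not hard-code the magic number, something like the following should work across versions (assuming the 2.4.x bindings expose the old constant under cv2.cv):
try:
    quality_flag = cv2.IMWRITE_JPEG_QUALITY        # newer bindings
except AttributeError:
    quality_flag = cv2.cv.CV_IMWRITE_JPEG_QUALITY  # OpenCV 2.4.x bindings
encode_param = [int(quality_flag), 90]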
Recently I migrated to OpenCV 2.4.3 from OpenCV 2.4.1.
My program, which worked well with version 2.4.1, now encounters a problem with 2.4.3.
The problem is related to VideoCapture, which cannot open my video file.
I saw a similar problem while searching the internet, but I couldn't find a proper solution for this. Here is my sample code:
cv::Mat imgFrame;
VideoCapture video(argv[1]);

while (video.grab())
{
    video.retrieve(imgFrame);
    imshow("Video", imgFrame);
    waitKey(1);
}
It's worth mentioning that capturing video from the webcam device works well, but I want to grab a stream from a file.
I'm using Qt Creator 5, and I compiled OpenCV with MinGW. I'm on Windows.
I tried several different video formats and rebuilt OpenCV with and without ffmpeg, but the problem still persists.
Any idea how to solve the problem?
Try this:
cv::Mat ImgFrame;
VideoCapture video(argv[1]);
int delay = 1000.0 / video.get(CV_CAP_PROP_FPS);

while (1)
{
    if (!video.read(ImgFrame)) break;
    imshow("Video", ImgFrame);
    waitKey(delay);
}
In my experience with OpenCV, I struggled using IP cams until my mentor discovered how to get them to work. Don't forget to plug in your IP address, otherwise it won't work!
import cv2
import numpy as np
import urllib.request

# Set up the connection to the camera's MJPEG stream
stream = urllib.request.urlopen('http://xx.x.x.xx/mjpg/video.mjpg')
bytes = b''

while True:
    # Accumulate bytes from the stream until a whole JPEG frame is available
    bytes += stream.read(16384)
    a = bytes.find(b'\xff\xd8')  # JPEG start-of-image marker
    b = bytes.find(b'\xff\xd9')  # JPEG end-of-image marker
    if a != -1 and b != -1:
        jpg = bytes[a:b + 2]
        bytes = bytes[b + 2:]
        frame = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
        img = frame[0:400, 0:640]  # Crop: rows are height, columns are width, i.e. [0:HEIGHT, 0:WIDTH]
        # Display the final product
        cv2.imshow('frame', frame)
        cv2.imshow('img', img)
        # Hit Esc to quit
        if cv2.waitKey(1) == 27:
            exit(0)
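As an aside, if your OpenCV build has FFmpeg support, it may be able to open the MJPEG URL directly, which avoids the manual marker parsing. A hedged sketch, using the same placeholder address:
import cv2

# Assumes an FFmpeg-enabled OpenCV build; may not work on every setup
cap = cv2.VideoCapture('http://xx.x.x.xx/mjpg/video.mjpg')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break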