I am trying to do face detection in OpenCV. When I run my code, the camera light on my laptop turns on, but no frame window ever appears on screen.
Can anyone help?
Below is the code:
# (imports assumed earlier in the script: cv2, time, imutils,
#  and VideoStream from imutils.video; "args" comes from an
#  argparse parser defined earlier, not shown)

print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])

# initialize the video stream
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
time.sleep(1.0)

while True:
    frame = vs.read()
    frame = imutils.resize(frame, width=400)
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
Try changing to src=1 instead of src=0:

vs = VideoStream(src=1).start()
time.sleep(1.0)
[image of the program execution]
I'm trying to make a program that shows the video stream of a USB camera inside a tkinter app. I use OpenCV to display the image, but I'm not yet plotting the image in the tkinter app itself.
The program controls a differential robot using position and orientation information obtained from the camera. The first step is to connect the PC to the Raspberry Pi (the Raspberry Pi sends commands to the robot over the serial port). When I click the button "Conectar", the camera stream stops; but if I use the laptop camera instead, the stream doesn't stop, and I don't understand why. I need the stream not to stop, because the same thing happens with the button "Ir al punto", which executes the function that leads the robot to the destination point. If the stream is stopped, the position and orientation information is wrong and the robot can't reach the point.
I use threading to display the camera:
# (imports assumed: cv2, numpy as np, threading, apriltag,
#  and a tkinter root created below)

# Creation of the tkinter app: labels, buttons, ...
# (not shown here because it is too long)

# Camera parameters
cv2.namedWindow("Zoom")
cv2.moveWindow("Zoom", 0, 512)
cv2.moveWindow("Visualization", 0, 0)
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FPS, 30)
detector = apriltag.Detector()
zoomed = np.zeros((300, 300, 3), dtype=np.uint8)

# Threading
t1 = threading.Thread(target=show_camera2)  # show_camera2() is the usual OpenCV VideoCapture/imshow loop
t1.start()
root.mainloop()
I've tried to merge the OpenCV windows into the tkinter app using this:

imgframe = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
img1 = Image.fromarray(imgframe)
imgtk1 = ImageTk.PhotoImage(image=img1)
label_camera.imgtk = imgtk1  # keep a reference so the image is not garbage-collected
label_camera.configure(image=imgtk1)
label_camera.update()
but the stream still stops when I press a button.
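One thing worth checking: tkinter button callbacks run on the main loop, so any blocking work inside them (such as opening a serial connection) freezes everything scheduled on that loop. A common pattern is to push the blocking call onto a worker thread so the frame loop keeps running. This is only a sketch of that pattern; slow_connect and on_conectar_clicked are stand-ins I made up, not code from the question:

```python
import threading
import time

frames_shown = 0

def camera_loop(stop_event):
    # Stand-in for the OpenCV read/imshow loop: counts "frames" until stopped.
    global frames_shown
    while not stop_event.is_set():
        frames_shown += 1
        time.sleep(0.01)

def slow_connect():
    # Stand-in for a blocking operation such as opening a serial port.
    time.sleep(0.5)
    return "connected"

def on_conectar_clicked(result_holder):
    # Instead of calling slow_connect() directly in the button callback,
    # run it on a worker thread so the GUI and camera loop are not blocked.
    def worker():
        result_holder.append(slow_connect())
    threading.Thread(target=worker, daemon=True).start()

stop = threading.Event()
cam = threading.Thread(target=camera_loop, args=(stop,), daemon=True)
cam.start()

result = []
before = frames_shown
on_conectar_clicked(result)
time.sleep(0.7)          # give the "connection" time to finish
after = frames_shown

stop.set()
cam.join()
print(after > before, result)  # frames kept flowing while "connecting"
```

If the freeze persists even with the blocking work off the main loop, USB bandwidth shared between the external camera and the serial adapter is another possible culprit, which would also explain why the built-in laptop camera is unaffected.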
I am trying to make something like a video player in Python. A quick Google search showed me how to play a video using OpenCV, but the video rendered by OpenCV is not as crisp as the same video played in VLC media player. Screenshots from both players are shown below.
OpenCV rendering
Video in VLC media player
I have checked the width and height of the frames rendered by OpenCV, and they are 1080p, but somehow the video is still not as crisp as it should be. Here is the code used to render the frames:
import cv2

def start_slideshow_demo(video_file_path: str):
    cap = cv2.VideoCapture(video_file_path)
    cv2.namedWindow(video_file_path, cv2.WINDOW_GUI_EXPANDED)
    cv2.setWindowProperty(video_file_path, cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            cv2.imshow(video_file_path, frame)
            if cv2.waitKey(25) & 0xFF == ord('q'):
                break
        else:
            break
    cap.release()
    cv2.destroyAllWindows()
Any help is appreciated. Thank you.
I don't think it's an OpenCV issue. I used your code as-is on my desktop.
(left: OpenCV, right: VLC player)
My code processes a frame, and it takes a couple of seconds to execute. If I'm streaming from a camera, I will naturally drop frames and get a new frame only every couple of seconds, right? I want to simulate the same thing when replaying a video file.
Normally, when you call vidcap.read(), you get the next frame of the video. This essentially slows the video down and never misses a frame, which is not like processing a live camera stream. Is there a way to process the video file and drop frames during processing, the way a camera stream would?
The solution that comes to my mind is to keep track of time myself and call vidcap.set(cv2.CAP_PROP_POS_MSEC, currentTime) before each vidcap.read(). Is this how I should do it, or is there a better way?
One approach is to keep track of the processing time and skip that number of frames:

import cv2
import time

# Capture a saved video that is used as a stand-in for a webcam
cap = cv2.VideoCapture('/home/stephen/Desktop/source_vids/ss(6,6)_id_146.MP4')
# Frames per second of the source video
# (could also be read with cap.get(cv2.CAP_PROP_FPS))
fps = 120

# Iterate through the video
while True:
    # Record the start time for this frame
    start = time.time()
    # Read the frame
    _, img = cap.read()
    # Show the image
    cv2.imshow('img', img)
    # Whatever processing that is going to slow things down should go here
    k = cv2.waitKey(0)
    if k == 27:
        break
    # Calculate the time it took to process this frame
    total = time.time() - start
    # Print out how many frames to skip
    print(total * fps)
    # Skip the frames that would have been dropped during processing
    for skip_frame in range(int(total * fps)):
        _, _ = cap.read()

cv2.destroyAllWindows()
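The bookkeeping above boils down to "skip elapsed_time × fps frames after each processed frame". A minimal, camera-free sketch of that arithmetic (the function name is mine, not from the answer):

```python
def frames_to_skip(elapsed_seconds: float, fps: float) -> int:
    """Number of source frames that went by while one frame
    was being processed for elapsed_seconds."""
    return int(elapsed_seconds * fps)

# If processing a frame takes 0.5 s of a 120 fps video,
# 60 frames went by and should be skipped on the next read.
print(frames_to_skip(0.5, 120))   # -> 60
print(frames_to_skip(0.01, 30))   # -> 0 (faster than one frame interval)
```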
This is probably better than nothing, but it does not correctly simulate the way that frames will be dropped. It appears that during processing, the webcam data is written to a buffer (until the buffer fills up). A better approach is to capture the video with a dummy process. This processor intensive dummy process will cause frames to be dropped:
import cv2
import math

# Capture the webcam
cap = cv2.VideoCapture(0)
# Create a video writer
vid_writer = cv2.VideoWriter('/home/stephen/Desktop/drop_frames.avi',
                             cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'),
                             30, (640, 480))

# Iterate through the video
while True:
    # Read the frame
    _, img = cap.read()
    # Show the image
    cv2.imshow('img', img)
    k = cv2.waitKey(1)
    if k == 27:
        break
    # Do some processing to simulate your program
    for x in range(400):
        for y in range(40):
            for i in range(2):
                dummy = math.sqrt(i + img[x, y][0])
    # Write the video frame
    vid_writer.write(img)

cap.release()
vid_writer.release()
cv2.destroyAllWindows()
I want to use a video capture card to capture my screen display and process the images with OpenCV/C++.
I have heard that there are video capture cards that behave like webcams (i.e. I could read the screen display through VideoCapture in OpenCV).
Can someone tell me which video capture card I should buy?
Thanks!
I do not know if there is a way to achieve this directly with OpenCV. However, a simple workaround could look like this:
Using this software you can create a new webcam that streams your screen: https://sparkosoft.com/how-to-stream-desktop-as-webcam-video
Then, using OpenCV, you can start capturing the stream with this code:
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap;
    if (!cap.open(0))  // Use the new webcam id instead of 0
        return 0;
    while (true) {
        cv::Mat frame;
        cap >> frame;
        if (frame.empty()) break;
        cv::imshow("Screen", frame);
        if (cv::waitKey(10) == 27) break;  // Esc to quit
    }
    return 0;
}
I don't know if this still helps, but I found a way using OpenCV.
On Linux, with Python, we can achieve this with the following piece of code:

import cv2
cap = cv2.VideoCapture('/dev/video0')
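On Linux, the available V4L2 device nodes can be listed before picking one to open. This small sketch uses only the standard library (on non-Linux systems, or machines without video devices, the list will simply be empty):

```python
import glob

def list_video_devices():
    """Return the V4L2 device nodes present on this machine, sorted."""
    return sorted(glob.glob('/dev/video*'))

devices = list_video_devices()
print(devices)  # e.g. ['/dev/video0', '/dev/video1'] on a Linux box with cameras
```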
This is probably an open-ended question. I have written an OpenCV application that captures feeds from two external cameras connected to the computer. Capture from the two cameras runs in parallel on two different threads. This recorder module writes the frames to a video file, which is later processed. The following code sits inside each thread function:
CvCapture *capture = cvCaptureFromCAM(indexOfCamera);
if (!capture) return;

CvSize sz = cvGetSize(cvQueryFrame(capture));
cvNamedWindow("src");

// p is the output file path (defined elsewhere)
CvVideoWriter *writer = cvCreateVideoWriter((char*) p, CV_FOURCC('L','A','G','S'), 20, sz);
if (!writer) {
    cvReleaseCapture(&capture);
    return;
}

IplImage *frame;
LARGE_INTEGER sideCamCounter;
int frameCounter = 0;
while (true) {
    QueryPerformanceCounter(&sideCamCounter);
    frame = cvQueryFrame(capture);
    if (!frame) break;
    // Store timestamp of frame somewhere
    cvShowImage("src", frame);
    cvWriteFrame(writer, frame);
    int c = cvWaitKey(1);
    if ((char)c == 27) break;
    ++frameCounter;
}

cvReleaseVideoWriter(&writer);
cvReleaseCapture(&capture);
cvDestroyAllWindows();
The two cameras I am using are: A, a Microsoft HD-6000 LifeCam for notebooks, and B, a Logitech Sphere AF webcam. Camera A captures at around 16-20 fps (reaching up to 30 fps during a few recordings) and camera B at around 10-12 fps.
I need a faster capture rate to be able to capture real-time motion. I understand I will be limited by the cameras' capture speed, but apart from that, what other factors affect the capture rate, e.g. load on the system (memory and CPU), or the APIs used? I am open to exploring options. Thanks.
Try setting different camera properties (http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#videocapture-set); probably the most interesting one for you will be... FPS :) Note that it doesn't always work (see "How to set camera FPS in OpenCV? CV_CAP_PROP_FPS is a fake"), but give it a chance, maybe it will help you. You may also try setting a smaller image resolution.
If you don't have to display the image, don't.
You may try to grab frames in one thread and process them in another.
Connect the cameras directly to your computer; don't use a USB hub.
"the API's used" - I don't think it will help, but if you want, you may try a different API (see "OpenCV on Mac is not opening USB web camera").
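The grab-in-one-thread, process-in-another suggestion can be sketched without a real camera. Here a dummy integer source stands in for the capture call, and a single-slot queue keeps only the newest frame, so a slow consumer never stalls the grabber (all names are illustrative, not from the answer):

```python
import queue
import threading
import time

latest = queue.Queue(maxsize=1)   # single slot: holds only the newest frame
processed = []

def grabber(n_frames):
    # Stand-in for the capture loop: produce frames as fast as the "camera" allows.
    for i in range(n_frames):
        try:
            latest.put_nowait(i)
        except queue.Full:
            # Drop the stale frame and replace it with the fresh one.
            try:
                latest.get_nowait()
            except queue.Empty:
                pass
            latest.put_nowait(i)
        time.sleep(0.005)          # ~200 fps "camera"
    latest.put(None)               # sentinel: no more frames

def processor():
    # Slow consumer: always works on the most recent frame available.
    while True:
        frame = latest.get()
        if frame is None:
            break
        processed.append(frame)
        time.sleep(0.02)           # simulate expensive processing

t_grab = threading.Thread(target=grabber, args=(50,))
t_proc = threading.Thread(target=processor)
t_grab.start(); t_proc.start()
t_grab.join(); t_proc.join()

# Fewer than 50 frames get processed, but the last one processed is recent,
# which is exactly the behavior you want for real-time motion.
print(len(processed), processed[-1])
```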