I am running a pose tracking application on Colab (with MediaPipe). It does not show a continuous video; instead it processes my video frame by frame, showing the frames in succession in the output section below my block of code. So it becomes very slow at processing a single video, and the output section fills up with frames, so I have to scroll through a lot of output to reach the top or the bottom. The goal is to have a video stream like a normal Linux application on my PC.
This is the main() of my application:
import time
import cv2
from google.colab.patches import cv2_imshow

cap = cv2.VideoCapture('1500_doha.mp4')
pTime = 0
detector = poseDetector()
while cap.isOpened():
    success, img = cap.read()
    if not success:
        break
    height, width, c = img.shape
    img = detector.findPose(img)
    lmList = detector.findPosition(img, draw=False)
    angle = detector.findAngle(img, 11, 13, 15)  # careful, switch arm every now and then!
    cTime = time.time()
    fps = 1 / (cTime - pTime)
    pTime = cTime
    cv2.putText(img, str(int(fps)), (70, 50), cv2.FONT_HERSHEY_PLAIN, 3,
                (255, 0, 0), 3)
    cv2_imshow(img)
    cv2.waitKey(10)
The problem is clearly in cv2_imshow(): if I run a YOLOv4 box detector I don't need this call and I get a continuous stream. Do you have any suggestions? Is there already a solution online?
Here is part of the output box of my Google Colab.
Here you find the complete file https://colab.research.google.com/drive/1uEWiCGh8XY5DwalAzIe0PpzYkvDNtXID#scrollTo=HPF2oi7ydpdV
I have an RTP/RTSP stream that's running at 25fps, as verified by ffprobe -i <URI>. Also, VLC plays back the RTSP stream at a real-time rate, but doesn't show me the FPS in the Media Information window.
However, when I use OpenCV 4.1.1.26 to retrieve the input stream's frame rate, it is giving me a response of 90000.0.
Question: How can I use OpenCV to probe for the correct frame rate of the RTSP stream? What would cause it to report 90000.0 instead of 25?
Here's my Python function to retrieve the frame rate:
import cv2

vid: cv2.VideoCapture = cv2.VideoCapture('rtsp://192.168.1.10/cam1/mpeg4')

def get_framerate(video: cv2.VideoCapture):
    fps = video.get(cv2.CAP_PROP_FPS)
    print('FPS is {0}'.format(fps))

get_framerate(vid)
MacOS Catalina
Python 3.7.4
I hope this helps you somehow. It is a simple FPS calculator: it captures cont frames, measures the start and end time, and then converts the count to frames per second with a rule of three.
Regarding your second question, I read here that it could be due to a bad installation. Also, you can check that your camera is working properly by printing the ret variable: if it is True you should be able to see the fps; if it is False you can get an unpredictable result.
cv2.imshow() and key = cv2.waitKey(1) should be commented out, as they add delay and distort the measurement.
I post this as an answer because I do not have enough reputation points to comment.
from datetime import datetime
import cv2

img = cv2.VideoCapture('rtsp://192.168.1.10/cam1/mpeg4')
cont = 0
start = datetime.now()
while True:
    if cont == 50:
        a = datetime.now() - start
        b = a.seconds * 1e6 + a.microseconds  # elapsed time in microseconds
        print(b, "fps = ", (50 * 1e6) / b)
        break
    ret, frame = img.read()
    if not ret:
        break
    # Comment out for best measurement
    cv2.imshow('fer', frame)
    key = cv2.waitKey(1)
    if key == ord('q'):
        break
    cont += 1
img.release()
cv2.destroyAllWindows()
I have a Garmin VIRB XE camera and want to get a live stream and interact with the camera, e.g. to get GPS data. I can get the live stream with VLC media player and can post commands to the camera with curl from the Windows command prompt, but I can't get the live stream using OpenCV or interact with the camera using the requests library in Python.
I can get a live stream from "rtsp://192.168.1.35/livePreviewStream" using VLC media player's network streaming feature, and I can interact with the camera: for example, with "curl --data "{\"command\":\"startRecording\"}" http://192.168.1.35/virb" from the command prompt I can start the recording. But the following code does not work:
'''
import simplejson
import requests
url='http://192.168.1.37:80/virb'
data = {'command':'startRecording'}
r=requests.post(url, simplejson.dumps(data))
'''
or
'''
import cv2
capture = cv2.VideoCapture("rtsp://192.168.1.35/livePreviewStream")
'''
The POST returns the error
"ProxyError: HTTPConnectionPool(host='127.0.0.1', port=8000): Max retries exceeded with url: http://192.168.1.37:80/virb (Caused by ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response')))".
Also, the capture does not get any frames.
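One observation on the POST error (my guess, not something confirmed in this thread): the ProxyError mentions 127.0.0.1:8000, which suggests requests is routing the call through a local proxy picked up from system or environment settings. A session that ignores those settings may get the request through:

```python
import requests

# trust_env=False tells requests to ignore HTTP(S)_PROXY environment
# variables and system proxy settings
session = requests.Session()
session.trust_env = False

# hypothetical call mirroring the curl command above (camera address assumed):
# r = session.post('http://192.168.1.35/virb', json={'command': 'startRecording'})
```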
Since you have already confirmed that your RTSP link works with VLC player, here's an IP camera video streaming widget using OpenCV and cv2.VideoCapture.read(). This implementation uses threading, since read() is a blocking operation: obtaining frames in a separate thread that focuses only on reading improves performance by reducing I/O latency. I used my own IP camera RTSP stream link; change stream_link to your own IP camera link.
Depending on your IP camera, your RTSP stream link will vary. Here are examples of mine:
rtsp://username:password@192.168.1.49:554/cam/realmonitor?channel=1&subtype=0
rtsp://username:password@192.168.1.45/axis-media/media.amp
Code
from threading import Thread
import cv2

class VideoStreamWidget(object):
    def __init__(self, src=0):
        # Create a VideoCapture object
        self.capture = cv2.VideoCapture(src)
        # Start the thread to read frames from the video stream
        self.thread = Thread(target=self.update, args=())
        self.thread.daemon = True
        self.thread.start()

    def update(self):
        # Read the next frame from the stream in a different thread
        while True:
            if self.capture.isOpened():
                (self.status, self.frame) = self.capture.read()

    def show_frame(self):
        # Display frames in main program
        if self.status:
            self.frame = self.maintain_aspect_ratio_resize(self.frame, width=600)
            cv2.imshow('IP Camera Video Streaming', self.frame)

        # Press Q on keyboard to stop recording
        key = cv2.waitKey(1)
        if key == ord('q'):
            self.capture.release()
            cv2.destroyAllWindows()
            exit(1)

    # Resizes an image while maintaining aspect ratio
    def maintain_aspect_ratio_resize(self, image, width=None, height=None, inter=cv2.INTER_AREA):
        # Grab the image size and initialize dimensions
        dim = None
        (h, w) = image.shape[:2]

        # Return original image if no need to resize
        if width is None and height is None:
            return image

        # We are resizing height if width is None
        if width is None:
            # Calculate the ratio of the height and construct the dimensions
            r = height / float(h)
            dim = (int(w * r), height)
        # We are resizing width if height is None
        else:
            # Calculate the ratio of the width and construct the dimensions
            r = width / float(w)
            dim = (width, int(h * r))

        # Return the resized image
        return cv2.resize(image, dim, interpolation=inter)

if __name__ == '__main__':
    stream_link = 'your stream link!'
    video_stream_widget = VideoStreamWidget(stream_link)
    while True:
        try:
            video_stream_widget.show_frame()
        except AttributeError:
            pass
Python 3.5.2, anaconda 4.2.0 on Windows 10.
OpenCV installed from conda, version 3.1.0.
I'm trying to process a video file by opening it, transforming each frame, and putting the result into a new video file. The output file is created, but it's about 800 bytes and effectively empty. The input file has ~4,000 frames and is about 150 MB.
Here's my code, which follows the guide on the OpenCV documentation pretty closely.
import cv2
import progressbar

# preprocess video
# args.input is a valid file name
outname = 'foo.mp4'
cap = cv2.VideoCapture(args.input)
codec = int(cap.get(cv2.CAP_PROP_FOURCC))
framerate = app_config.camera.framerate  # 240
size = (app_config.camera.width, app_config.camera.height)  # 1080 x 720
vw = cv2.VideoWriter(filename=outname, fourcc=codec, fps=framerate, frameSize=size, isColor=False)
curframe = 0
with progressbar.ProgressBar(min_value=0, max_value=int(cap.get(cv2.CAP_PROP_FRAME_COUNT))) as pb:
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            # update the progress bar
            curframe += 1
            pb.update(curframe)
            # convert to greyscale
            grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # invert colors
            inverted = cv2.bitwise_not(grey)
            vw.write(inverted)
            #cv2.imshow('right', right)
            #if cv2.waitKey(1) & 0xFF == ord('q'):
            #    break
        else:
            break
cap.release()
vw.release()
cv2.destroyAllWindows()
I receive the following error:
OpenCV: FFMPEG: tag 0x7634706d/'mp4v' is not supported with codec id 13 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x00000020/' ???'
I receive similar errors (as well as a warning that I have an incorrect environment variable for the h.264 library path) if I try to set codec = cv2.VideoWriter_fourcc(*'H264').
Ensure that the dimensions of inverted match the dimensions of the size parameter in the VideoWriter definition.
Also use the 'M','P','4','V' codec with the .mp4 container.
So I set
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FPS, 60)
I also tried the integer 5 instead of cv2.CAP_PROP_FPS. Nevertheless, the frame rate doesn't change. I still get 30 when I
print(cap.get(cv2.CAP_PROP_FPS))
Why?
The problem may be with the codec of the camera stream and not with the FPS itself. For example, if your camera only supports YUYV, it probably only works at certain specific FPS values; try the app guvcview to check this in a GUI.
Try changing the codec to MJPG and then changing the FPS using CAP_PROP_FPS. I'm using a Logitech C922 Pro and this works for me to configure 1080p at 30fps; if you have another camera, you probably need to use a lower resolution to achieve 30fps:
import cv2 as cv

def decode_fourcc(v):
    v = int(v)
    return "".join([chr((v >> 8 * i) & 0xFF) for i in range(4)])

def setfourccmjpg(cap):
    oldfourcc = decode_fourcc(cap.get(cv.CAP_PROP_FOURCC))
    codec = cv.VideoWriter_fourcc(*'MJPG')
    res = cap.set(cv.CAP_PROP_FOURCC, codec)
    if res:
        print("codec in ", decode_fourcc(cap.get(cv.CAP_PROP_FOURCC)))
    else:
        print("error, codec in ", decode_fourcc(cap.get(cv.CAP_PROP_FOURCC)))

cap = cv.VideoCapture(CAMERANUM)  # CAMERANUM is your camera index
setfourccmjpg(cap)
w = 1920
h = 1080
fps = 30
res1 = cap.set(cv.CAP_PROP_FRAME_WIDTH, w)
res2 = cap.set(cv.CAP_PROP_FRAME_HEIGHT, h)
res3 = cap.set(cv.CAP_PROP_FPS, fps)
then resume your normal video capture polling loop.
Not all OpenCV parameters are supported by all cameras. Each camera has a different set of parameters that can be set; you need to find out which parameters your camera supports.
I am writing a small script (in Python) that generates and updates a running average of a camera feed. When I call cv.RunningAvg it returns:
cv2.error: func != 0
Where am I stumbling in implementing cv.RunningAvg? Script follows:
import cv

feed = cv.CaptureFromCAM(0)
frame = cv.QueryFrame(feed)
moving_average = cv.QueryFrame(feed)
cv.NamedWindow('live', cv.CV_WINDOW_AUTOSIZE)

def loop():
    frame = cv.QueryFrame(feed)
    cv.ShowImage('live', frame)
    c = cv.WaitKey(10)
    cv.RunningAvg(frame, moving_average, 0.020, None)

while True:
    loop()
I am not sure about the error, but check out the documentation for cv.RunningAvg: it says the destination should be 32- or 64-bit floating point.
So I made a small correction in your code and it works. I created a 32-bit floating point image to store the running average values, and another 8-bit image so that I can show the running average image:
import cv2.cv as cv

feed = cv.CaptureFromCAM(0)
frame = cv.QueryFrame(feed)
moving_average = cv.CreateImage(cv.GetSize(frame), 32, 3)  # image to store running avg
avg_show = cv.CreateImage(cv.GetSize(frame), 8, 3)         # image to show running avg

def loop():
    frame = cv.QueryFrame(feed)
    c = cv.WaitKey(10)
    cv.RunningAvg(frame, moving_average, 0.1, None)
    cv.ConvertScaleAbs(moving_average, avg_show)  # converting back to 8-bit to show
    cv.ShowImage('live', frame)
    cv.ShowImage('avg', avg_show)

while True:
    loop()
cv.DestroyAllWindows()
Now see the result:
At a particular instant, I saved a frame and its corresponding running-average frame.
Original frame:
You can see the obstacle (my hand) blocking the objects behind it.
Now the running-average frame:
It almost removes my hand and shows the objects in the background.
That is how it is a good tool for background subtraction.
One more example, from a typical traffic video:
You can see more details and samples here : http://opencvpython.blogspot.com/2012/07/background-extraction-using-running.html