I purchased an IP camera, and I was planning to use its video stream as input for OpenCV (with Python) to run some machine learning algorithms on the video frames.
I opened port 554 for RTSP on my router, and now I'm trying to access the stream by using (11 identifies the main video stream):
cap = cv2.VideoCapture("rtsp://user_name:password#camera_IP:554/11")
while(True):
    ret, frame = cap.read()
    ...
It works without any problems from within the local network, but not from outside... in that case, frame is returned as None (a 'NoneType' object).
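For reference, here is the minimal check I use to tell a failed connection apart from a failed read (same placeholder URL as above):

import cv2

cap = cv2.VideoCapture("rtsp://user_name:password@camera_IP:554/11")

if not cap.isOpened():
    print("could not open the stream")  # the RTSP connection itself failed
else:
    ret, frame = cap.read()
    if not ret:
        print("connected, but reading a frame failed")  # frame is None in this case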
In the camera settings, ports 80 and 1935 are indicated for HTTP and RTMP, respectively. I tried to use them as well, but without any success.
If I simply open the camera IP in a browser, I get to the configuration page, and there I have the possibility to watch the live stream. It's embedded in a Flash object, and I'm trying to figure out if I can extract the video stream URL from it.
I viewed the page source, and there seems to be a reference to the source of the stream:
flashvars="&src=rtmp://camera_IP:1935/flash/11:YWRtaW46YWRtaW4=&
but I wasn't able to use it to fetch the stream in OpenCV.
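In case it matters, this is roughly what I tried with that value; the trailing token looks like plain Base64, and I understand OpenCV's FFmpeg backend can sometimes open rtmp:// URLs directly, though that depends on the build:

import base64
import cv2

src = "rtmp://camera_IP:1935/flash/11:YWRtaW46YWRtaW4="

# inspect the trailing token; it decodes as ordinary Base64
token = src.rsplit(":", 1)[-1]
print(base64.b64decode(token))

# try the URL as-is; this only works if the OpenCV/FFmpeg build has RTMP support
cap = cv2.VideoCapture(src)
print(cap.isOpened())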
Any suggestion, or should I just go for another camera?
How do I stream live video from my Raspberry Pi to my phone on VLC?
I am aware of tools such as raspivid, but that only streams the raw output of the camera. I want to process the frames using OpenCV, add a few text boxes, facial recognition, etc...
EDIT: I think I was unclear about what I wanted.
I want to capture video from a camera to a Raspberry Pi, process it, add boxes from YOLO/PyTesseract, etc., and stream all those processed frames in real time to my phone's VLC client.
You can definitely do this using OpenCV.
cap = cv2.VideoCapture(cam, cv2.CAP_DSHOW)  # on Windows, using the DirectShow backend
# or simply:
cap = cv2.VideoCapture(cam)

test, frame = cap.read()
# process the frame
I haven't written out the whole code, but I am confident it works if you do this. I assume your webcam is connected to a PC or another Raspberry Pi.
Capture the image (referring to the code above), then establish a socket connection and send the image over to the Raspberry Pi.
On your Raspberry Pi, receive the image from the socket connection, do whatever processing you need, and send it out through a VLC stream. You may refer to this link.
Your phone can then access the image using http://ip:port
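A rough sketch of the send/receive part is below; the hostname, port number, and the idea of JPEG-encoding each frame with a length prefix are my own assumptions, not the only way to do it:

import socket
import struct

import cv2
import numpy as np

# --- sender (the machine with the webcam) ---
def send_frames(host="raspberrypi.local", port=5000, cam=0):
    cap = cv2.VideoCapture(cam)
    sock = socket.create_connection((host, port))
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # JPEG-encode the frame so we send a compact byte buffer
        ok, buf = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        data = buf.tobytes()
        # prefix each frame with its length so the receiver knows where it ends
        sock.sendall(struct.pack(">I", len(data)) + data)

# --- receiver (the Raspberry Pi) ---
def recv_frames(port=5000):
    srv = socket.socket()
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        header = conn.recv(4)
        if len(header) < 4:
            break
        size = struct.unpack(">I", header)[0]
        data = b""
        while len(data) < size:
            chunk = conn.recv(size - len(data))
            if not chunk:
                return
            data += chunk
        frame = cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)
        # ... run YOLO / pytesseract / drawing here, then hand the frame to the VLC stream ...

The 4-byte length prefix is only there so the receiver knows where one frame ends and the next begins; any framing scheme will do.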
I have a Teledyne Dalsa Genie Nano XL camera: when I connect it to the PC, it gets assigned the following IP address: 192.168.0.20
How do I find or set up the video stream URL for the camera, in order to access its video stream through the standard OpenCV call cap = cv2.VideoCapture('url')?
Any help will be highly appreciated.
I assume you are trying to stream from an IP camera via RTSP. If so, you can achieve it with this line of code:
Python version:
cap = cv2.VideoCapture('rtsp://admin:admin@192.168.0.20:554/stream1 latency=0')
C++ version:
cv::VideoCapture cap("rtsp://admin:admin@192.168.0.20:554/stream1 latency=0");
Here, the first admin is the username used to connect to your IP camera, and the second is the password. By default, an RTSP connection uses port 554, but you may refer to your camera's documentation to double-check it.
The string :554/stream1 varies depending on your camera brand, so you should check your manual for the RTSP connection string.
The parameter latency=0 means you want to stream from the camera without any delay. By default, an RTSP connection buffers the stream (typically 2-5 seconds), which introduces a delay relative to the live content.
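If your OpenCV build does not accept latency=0 appended to the plain URL, the same effect can be had with a GStreamer pipeline, where latency is an actual property of rtspsrc. This is only a sketch and assumes OpenCV was built with GStreamer support; the credentials and stream path are the same placeholders as above:

import cv2

# assumes an H.264 stream; adjust the depayloader/decoder for other codecs
pipeline = (
    "rtspsrc location=rtsp://admin:admin@192.168.0.20:554/stream1 latency=0 "
    "! rtph264depay ! avdec_h264 ! videoconvert ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)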
I have a raspberry pi setup with the uv4l driver and the native uv4l-WebRTC module. So far, I can see the video stream work fine on my browser, but what I want to do now is to be able to simultaneously stream the video to the browser and also pass some of the frames to my opencv-python program.
I was able to test if I can get some data on a video by using the following python code:
import numpy as np
import cv2

imageMat = np.zeros((3, 4, 3), np.uint8)  # pre-allocated output buffer
cap = cv2.VideoCapture()
cap.open('https://<IP ADDRESS:8080>/stream/video.mjpeg')
cap.read(imageMat)
which works; the URL in the sample code above also plays if I open it in my browser. This URL is provided by the people who made the uv4l driver, but the problem is that I actually want to be able to use my custom webpage's video instead of the one being streamed from this default URL.
I've seen from other posts that I can pass the frames by drawing them on a canvas element, turning that into a Blob, and sending it over a WebSocket, but this would mean that I have to open another WebSocket (using Python this time), and I'm not sure this is the correct approach. I thought that by using UV4L I could easily obtain the frames while still being able to stream the video.
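For what it's worth, the Python side of that canvas/Blob idea would look roughly like the sketch below. It uses the third-party websockets package; the port number and the assumption that each message is one JPEG blob are mine, and I have not verified this is the right approach:

import asyncio

import cv2
import numpy as np
import websockets  # third-party: pip install websockets

async def handler(websocket):  # single-argument handler assumes websockets >= 10
    async for message in websocket:
        if not isinstance(message, bytes):
            continue  # ignore text messages; we only expect binary JPEG blobs
        frame = cv2.imdecode(np.frombuffer(message, np.uint8), cv2.IMREAD_COLOR)
        if frame is not None:
            pass  # hand the frame to the OpenCV processing code here

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())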
I have a multicast UDP video stream that I need my OpenCV (Emgu CV) 2.4.x app (the "client") to capture and process.
On the client, I can capture the stream using VLC (udp://xx.yy.zz.aaa:1234). However, my app fails to capture this UDP stream. My code is quite simple:
Capture cap = new Capture("udp://@212.1.1.1:1234");
P.S. I have tried with and without the @, and also tried rtp:// on that address. No luck :-/
Does OpenCV directly allow "capture" of UDP streams, or do I need to run VLC on the client to re-stream the video as RTP or HTTP or something else?
Thanks.
I finally figured this out and am sharing in the hope that it might help others.
Capture cap = new Capture("udp://@212.1.1.1:1234");
Don't forget the @ symbol!
The capture is successfully created on the UDP stream; however, accessing the capture properties makes it throw an exception, and that was the error I was hitting.
Long story short, the UDP stream does not appear to carry the device properties, so you might need to obtain them elsewhere or hard-code them.
One other thing of note: since the FPS (frames per second) value is unreliable, if not outright incorrect, you might need to make the FPS adjustable, especially if you are polling the stream in a loop.
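For anyone trying the same thing from Python instead of Emgu, a rough equivalent is below; the multicast address is the placeholder from above, and the fixed frame pacing is just my workaround for the unreliable FPS property:

import time

import cv2

# note the @ before the multicast address, as in the Emgu example above;
# some FFmpeg-based builds also accept the address without it
cap = cv2.VideoCapture("udp://@212.1.1.1:1234")

assumed_fps = 25.0  # the stream does not report a reliable FPS, so pick your own
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # ... process the frame ...
    time.sleep(1.0 / assumed_fps)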
HTH
IplImage* frame;
CvCapture* pCapture;

// open the UDP stream with the legacy OpenCV C API
pCapture = cvCaptureFromFile("udp://ip:port/path");
// grab and decode the next frame from the stream
frame = cvQueryFrame(pCapture);
This will also do the job in case you don't have videoInput libraries
I was wondering if I can use the HTTP protocol to acquire an image stream from an RTSP camera? I am currently using the VLC Media ActiveX Plugin to connect to and view the RTSP stream, but I would like to eliminate the ActiveX control and move to a more raw level of image acquisition. I recall seeing somewhere that it's possible to get these images using HTTP. I'd like to use the Indy TIdHTTP component to connect to the camera and acquire the image. I'm also assuming this would need some sort of speed control, such as a delay in between requests. However, it's also my understanding that these RTSP cameras have pre-defined frame rates, which clients using the standard RTSP protocol are supposed to follow.
Many cameras will allow you to grab snapshots with a URL that might look like:
http://user:password@camera/snapshot.jpg
For a proper stream, you would need to use RTSP (there are Delphi RTSP clients), tunnelling over HTTP if your device supports the application/x-rtsp-tunnelled content type, or another stream type your device supports.
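To sketch the snapshot-polling flow (shown in Python with the requests library only because that is what the rest of this page uses; the same loop maps onto TIdHTTP.Get plus a timer in Delphi, and the snapshot path is camera-specific):

import time

import cv2
import numpy as np
import requests  # third-party: pip install requests

url = "http://camera/snapshot.jpg"  # placeholder; check your camera for the real path

while True:
    resp = requests.get(url, auth=("user", "password"), timeout=5)  # HTTP Basic auth
    frame = cv2.imdecode(np.frombuffer(resp.content, np.uint8), cv2.IMREAD_COLOR)
    if frame is not None:
        pass  # use the image here
    time.sleep(0.2)  # crude rate control between requests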