I'm trying to create an overlay of two security camera views in one video stream and to stream it out over RTSP. There are two IP camera streams as input, and a single video stream combining both views in an overlay as output.
To create the overlay effect and to stream the video out, I use VLC player v3.0.6 on Windows 10. I run it from the command line to set everything up (the correct overlay, accepting the input streams, and creating the output stream). I can receive my inputs and create the overlay, and then either display it on screen or stream it out over HTTP. The HTTP stream works great; I can open it on another computer and watch it. However, I cannot change the output from HTTP to RTSP and make it work.
This is the VLM config file that sets up the inputs and outputs. This version outputs an HTTP stream:
del all
new channel1 broadcast enabled
setup channel1 input rtsp://xxx:xxx@192.168.xx.xx/profile2/media.smp
setup channel1 output #duplicate{dst=mosaic-bridge{id=1,height=720},select=video}
new channel2 broadcast enabled
setup channel2 input rtsp://xxx:xxx@192.168.x.x/profile2/media.smp
setup channel2 output #duplicate{dst=mosaic-bridge{id=4,height=340},select=video}
new background broadcast enabled
setup background input "file:///C:\Program Files\VideoLAN\VLC\pic.jpg"
setup background option image-duration=-1
setup background option image-fps=10
setup background option mosaic-width=1280
setup background option mosaic-height=720
setup background option mosaic-keep-picture=1
setup background output #transcode{sfilter=mosaic,vcodec=mpeg,vb=2000,fps=10}:bridge-in{delay=0,id-offset=0}:standard{access=http,mux=ogg,dst=192.168.xx.xx:18554}
control channel1 play
control channel2 play
control background play
To run it, I call VLC using this command:
vlc "--vlm-conf=C:\Projekty\mosaic\mosaic4.vlm" "--clock-jitter=0" "--mosaic-width=1280" "--mosaic-height=720" "--mosaic-keep-picture" "--mosaic-row=2" "--mosaic-cols=2" "--mosaic-position=1" "--mosaic-order=1,2,3,4" "--ttl=12" "--udp-caching=800" --verbose=2
It sets the mosaic view and resolution.
Now, the problem lies in the VLM file, in the output setting. I use the :standard module for output, but this module doesn't support RTSP.
OK, let's try letting VLC configure everything for me. There is an option to stream using the regular VLC GUI: you choose what to stream (a file / your screen / a single input stream), then you choose the output format, and that's it. At the end of the process, VLC even shows you the command it uses to stream. It looks like this:
:sout=#transcode{vcodec=h264,vb=56,venc=x264{profile=baseline},fps=12,scale=Automaticky,width=176,height=144,acodec=mp3,ab=24,channels=1,samplerate=44100,scodec=none}:rtp{sdp=rtsp://:8554/} :no-sout-all :sout-keep
It's a bunch of video transcoding settings, and then the output: :rtp{sdp=rtsp://:8554/}. And it works great; the other side receives a working RTSP stream.
Naturally, I try to replace my :standard (http) module with this :rtp setting, but for some reason it just doesn't work - the other side can't open the stream:
setup background output #transcode{sfilter=mosaic,vcodec=mpeg,vb=2000,fps=10}:bridge-in{delay=0,id-offset=0}:rtp{sdp=rtsp://:8554/} :no-sout-all :sout-keep
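One thing I notice: :no-sout-all and :sout-keep are command-line options that the GUI appends after the sout chain, not sout modules, so they may not even belong inside a VLM setup line. A sketch of the same line with them stripped (the /mosaic path on the sdp URL is my own guess, not something VLC requires):
setup background output #transcode{sfilter=mosaic,vcodec=mpeg,vb=2000,fps=10}:bridge-in{delay=0,id-offset=0}:rtp{sdp=rtsp://:8554/mosaic}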
Any suggestions? I can receive my streams and I can merge them together; I just can't get them out. The VLC documentation doesn't help much at this point.
Any help would be greatly appreciated.
I purchased an IP camera, and I was planning to use its video stream as input for OpenCV (with Python) to run some machine learning algorithms on the video frames.
I opened port 554 for RTSP on my router, and now I'm trying to access the stream by using the following (the 11 identifies the main video stream):
import cv2

# "11" selects the camera's main video stream
cap = cv2.VideoCapture("rtsp://user_name:password@camera_IP:554/11")
while True:
    ret, frame = cap.read()
    ...
It works without any problems from within the local network, but not from outside; in that case, frame is returned as None (a 'NoneType' object).
In the camera settings, ports 80 and 1935 are indicated for HTTP and RTMP, respectively. I tried using them as well, but without any success.
If I simply open the camera IP in a browser, I get to the configuration page, where I can watch the live stream. It's embedded in a Flash object, and I'm trying to figure out whether I can extract the video stream URL from it.
I viewed the page source, and there seems to be a reference to the source of the stream:
flashvars="&src=rtmp://camera_IP:1935/flash/11:YWRtaW46YWRtaW4=&
but I wasn't able to use it to fetch the stream in OpenCV.
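For the record, since OpenCV's VideoCapture is usually backed by FFmpeg, which understands rtmp:// URLs, my attempt looked roughly like this (a sketch; the URL is taken verbatim from the flashvars above):

import cv2

# FFmpeg's rtmp support should make this openable in principle;
# the token after the last '/' comes straight from the flashvars string
cap = cv2.VideoCapture("rtmp://camera_IP:1935/flash/11:YWRtaW46YWRtaW4=")
ret, frame = cap.read()
if not ret:
    print("could not fetch a frame from the RTMP stream")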
Any suggestion, or should I just go for another camera?
I have a Raspberry Pi set up with the uv4l driver and the native uv4l-WebRTC module. So far I can see the video stream work fine in my browser, but what I want to do now is stream the video to the browser and simultaneously pass some of the frames to my opencv-python program.
I was able to test whether I can get some data from the video by using the following Python code:
import numpy as np
import cv2

# note: np.array((3, 4, 3), np.uint8) would create the 1-D array [3, 4, 3];
# np.zeros actually allocates a 3x4x3 image buffer
imageMat = np.zeros((3, 4, 3), np.uint8)
cap = cv2.VideoCapture()
cap.open('https://<IP ADDRESS:8080>/stream/video.mjpeg')
ret, imageMat = cap.read(imageMat)
This works if I put the URL from the sample code above into my browser. The URL is provided by the people who made the uv4l driver, but the problem is that I actually want to use my custom webpage's video instead of the one streamed from this default URL.
I've seen from other posts that I can pass the frames by drawing them onto a canvas element, turning that into a Blob, and sending it over a websocket, but this would mean opening another websocket (using Python this time), and I'm not sure that's the correct approach. I thought that by using uv4l I could easily obtain the frames while still being able to stream the video.
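For reference, my understanding of that canvas/Blob approach on the Python side is something like the sketch below, using the third-party websockets package; the port number and the assumption that each websocket message is one JPEG-encoded frame are mine:

import asyncio
import cv2
import numpy as np
import websockets  # pip install websockets

async def handle(ws, path=None):
    # each binary message is assumed to be one JPEG frame produced
    # from the browser's canvas via toBlob()
    async for message in ws:
        data = np.frombuffer(message, dtype=np.uint8)
        frame = cv2.imdecode(data, cv2.IMREAD_COLOR)
        if frame is not None:
            pass  # run the OpenCV processing on frame here

async def main():
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()  # serve forever

asyncio.run(main())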
I am using a proprietary RTSP server (I don't have access to the source code) running on a Linaro-based embedded system. I connect to the device over WiFi and use VLC player to watch the stream. Every so often, VLC player's window resizes to a different size.
Is this normal behavior for an RTSP stream (resizing the video)?
-If yes, what is causing this change? Is it my WiFi bandwidth?
-If not, what are the suggested steps to find the root cause of this problem?
Thank you
Ahmad
Is this normal behavior for an RTSP stream (resizing the video)?
Yes, the RTSP DESCRIBE request should give info about the resolution. (See this discussion.)
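For illustration, many servers report the resolution directly in the SDP returned by DESCRIBE (others embed it in the codec's parameter sets); a made-up example:

v=0
m=video 0 RTP/AVP 96
a=rtpmap:96 H264/90000
a=framesize:96 1280-720
a=fmtp:96 packetization-mode=1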
-If yes, what is causing this change? Is it my WiFi bandwidth?
Most probably not. However, more info on your bandwidth and network setup would be needed to say for sure.
-If not, what are the suggested steps to find the root cause of this problem?
Option 1: Try disabling (unchecking) VLC's preference to resize the interface to the native video size, and see what happens.
There is also a post over at Super User discussing automatic resizing options.
Option 2: Enable VLC's verbose mode (console log) and see what errors or messages come up. This often helps and points to new directions to look for solutions.
Option 3: It could be a problem with how the resolution information is encoded in the stream. You would need to get in touch with the vendor of your RTSP server software to dig deeper.
Open VLC and press Ctrl + P, or go to
Tools -> Preferences -> Interface (look for the options below)
Integrated video in interface [Check]
Resize interface to video size [Uncheck]
Then close and reopen VLC for the change to take effect.
Not sure if this is something obvious or not. After creating a YouTube LiveBroadcast, binding it to a LiveStream with a specific CDN format (let's say "720p"), and transitioning the broadcast from "ready" to "live" ... how can I change the stream quality without having to create a new broadcast?
Trying to unbind the current stream - an exception is returned; the stream cannot be unbound.
Trying to bind the broadcast to another stream - same exception as above.
In addition, after looking through the support pages for YouTube live streaming, it is suggested that "ingest settings cannot be modified after the broadcast has started". Nothing there says the actual API cannot support this, but it looks like a limitation from somewhere deeper; I had thought it applied only to the web Live Control Room.
I need this functionality so that I can change the stream quality when a user switches from WiFi to mobile data. Currently, streaming RTMP data at a different resolution than what the LiveStream CDN format is configured for results in health errors and encoding artifacts on YouTube's side. As suggested by the support pages, creating a "1080p" live stream ("maximum expected resolution") should work, but when that stream receives a 720p or 480p feed, depending on whether it was already started, it either doesn't start at all or goes to a gray scene with high-pitched audio (my stream is sent correctly, since I can output it to a dozen other outputs, like MP4, FLV, and other RTMP servers).
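For context, the sequence I'm attempting looks roughly like this with google-api-python-client (a sketch: creds and broadcast_id are placeholders, and the bind call at the end is the one that raises the exception):

from googleapiclient.discovery import build

# creds: an already-authorized OAuth2 credential object
youtube = build("youtube", "v3", credentials=creds)

# create a second stream with lower-bandwidth CDN settings
new_stream = youtube.liveStreams().insert(
    part="snippet,cdn",
    body={
        "snippet": {"title": "fallback 480p ingest"},
        "cdn": {
            "ingestionType": "rtmp",
            "resolution": "480p",
            "frameRate": "30fps",
        },
    },
).execute()

# re-bind the running broadcast to the new stream -- this raises the exception
youtube.liveBroadcasts().bind(
    id=broadcast_id,
    part="id,contentDetails",
    streamId=new_stream["id"],
).execute()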
Solution?
I have a DirectShow filter graph in my Delphi 6 application built with the DSPACK component library. The structure of the graph is as follows:
Custom push source audio filter
Sample Grabber
Tee Filter (but only when I turn on both the WAV File Writer and Renderer)
Renderer (preferred PC output device)
WAV File Writer
The Tee Filter is added to the graph only if I have both the Renderer and the WAV File Writer filters turned on. Otherwise I connect only the filter that is turned on directly to the Sample Grabber.
The audio is delivered by a WiFi-connected RTSP audio server that streams audio in real time. If I don't turn on the WAV File Writer, the audio coming out of my headphones has the typical pumping and occasional clicking sounds associated with an unbuffered audio stream. Strangely enough, as soon as I turn on the WAV File Writer filter, the audio becomes smooth as glass.
I have the source code for the WAV File Writer, and it basically handles outputting the proper WAV file header when needed and writing the audio buffers as necessary, not much more than that. So I find it strange that turning it on smooths the incoming audio stream, especially since it is not upstream of the Renderer filter but is instead a peer filter hanging off the end of the Tee Filter alongside the Renderer.
Can anyone tell me what might be happening to make the audio delivery smooth out when I turn on the File Writer filter? Does the Tee Filter do any inherent buffering? I want to duplicate the same mechanism so I can have smooth audio when the File Writer is not turned on. I'm trying to avoid adding my own buffering, because I don't want to add any more delay to the real-time audio stream than I have to.
If you have a live source and can listen to it and the delivered audio at the same time, you may be able to tell whether adding the File Writer introduces a delay that would account for the difference. Alternatively, there may be a change in the size or number of buffers negotiated in DecideBufferSize.
I would suggest introducing explicit buffering in your push filter, such as adding an offset to media sample time-stamps. Inherent buffering in the Tee filter may not be reliable, and variations in delivery time are inevitable.
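To make the time-stamp idea concrete, here is a rough sketch (Python purely for illustration; the real filter would do this in Delphi when it stamps each DirectShow media sample):

# Shifting every sample's start/stop time forward by a constant gives the
# renderer that much scheduling slack, which absorbs delivery-time jitter
# at the cost of the same amount of added latency.
JITTER_OFFSET = 0.3  # seconds; tune against how much delay is acceptable

def stamp(sample_start, sample_stop, offset=JITTER_OFFSET):
    # times are relative to stream start, as the renderer expects
    return sample_start + offset, sample_stop + offset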
A more sophisticated approach, if you need to run with minimal or no buffering, would be to stretch/compress the audio while preserving the pitch.