OpenCV output to MJPEG stream

My application uses CvCapture to gather the images and matrices to perform the processing. The program currently runs headless. How do I write a matrix to a JPEG stream so I can stream it over HTTP?
Can I put the MJPEG stream in the nginx WWW root and serve it from there, or do I need to use a specific streaming application?
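One common approach, sketched below under the assumption that you can use the Python cv2 bindings (cv::imencode works the same way in C++): compress each matrix to an in-memory JPEG and serve the frames as a multipart/x-mixed-replace HTTP response, which is exactly what an MJPEG-over-HTTP stream is. Dropping a file into the nginx WWW root won't work, because MJPEG is a never-ending multipart response rather than a static file; instead, run a small HTTP endpoint like the one below and, if you like, have nginx reverse-proxy it.

    # Minimal MJPEG-over-HTTP sketch. The port and the "frame" boundary
    # string are illustrative choices, not anything required by MJPEG.
    import cv2
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    cap = cv2.VideoCapture(0)  # single shared capture; fine for a one-client test

    class MJPEGHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header('Content-Type',
                             'multipart/x-mixed-replace; boundary=frame')
            self.end_headers()
            try:
                while True:
                    ok, frame = cap.read()
                    if not ok:
                        break
                    ok, jpeg = cv2.imencode('.jpg', frame)  # matrix -> JPEG in memory
                    if not ok:
                        continue
                    data = jpeg.tobytes()
                    self.wfile.write(b'--frame\r\nContent-Type: image/jpeg\r\n')
                    self.wfile.write(b'Content-Length: %d\r\n\r\n' % len(data))
                    self.wfile.write(data)
                    self.wfile.write(b'\r\n')
            except (BrokenPipeError, ConnectionResetError):
                pass  # client went away

    ThreadingHTTPServer(('', 8080), MJPEGHandler).serve_forever()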

Related

How to scale video processing from multiple cameras using OpenCV, Kafka & Docker

We're trying to scale our real-time video processing system to support over a hundred cameras. The system is built mainly in Python.
We're polling RTSP camera streams using OpenCV and planning to deliver them using a Kafka producer. This part of the system is called the Poller or Stream Producer.
The cameras would be configured using a web interface, and the Poller would receive start/stop messages for any camera along with other details such as the RTSP stream URL; this would be done using Celery. For each start request, the Poller would create a new process for that camera and poll the stream using cv2.VideoCapture().read() (a sketch of this loop follows below). Captured frames would be sent to Kafka, tagged with the camera ID and a timestamp.
We're running all our components in Docker containers and intend to scale horizontally.
How can we scale the Poller to a large number of cameras (a hundred or even more) and effectively balance the camera streams across multiple Poller instances? Is there a way to achieve this using CPU/memory metrics, or is there a more standard approach we can follow with Docker?
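For concreteness, the per-camera capture loop described above might look like this minimal sketch (assuming the kafka-python package; the topic name "frames" and the JPEG payload encoding are illustrative choices, not part of the original design):

    # Per-camera poller sketch: grab frames with OpenCV and publish them
    # to Kafka keyed by camera ID.
    import time
    import cv2
    from kafka import KafkaProducer

    def poll_camera(camera_id, rtsp_url, bootstrap='kafka:9092'):
        producer = KafkaProducer(bootstrap_servers=bootstrap)
        cap = cv2.VideoCapture(rtsp_url)
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # stream dropped; a production poller would reconnect
            ok, jpeg = cv2.imencode('.jpg', frame)
            if not ok:
                continue
            producer.send('frames',
                          key=camera_id.encode(),  # all of a camera's frames land in one partition
                          value=jpeg.tobytes(),
                          timestamp_ms=int(time.time() * 1000))
        producer.flush()

Keying by camera ID keeps each camera's frames ordered within a single partition, which also suggests a balancing scheme: distribute camera IDs across Poller instances (for example by hashing the ID) rather than balancing on CPU/memory load.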

FFmpeg: UDP statistics

I'm using FFmpeg in my iOS application to read an RTSP (over UDP) stream. VLC offers some media information -> statistics about the stream being played; the number of video frames lost is what I'm interested in.
My question: is there a way to get these statistics (frames lost) with FFmpeg while reading from a UDP stream?
Thanks.
RTCP is used to collect such statistics. Check whether the FFmpeg libraries provide an API to access the RTCP information.

Use VLC to stream RTSP feed as HTTP Live Stream

I have a really high quality RTSP feed coming into a Windows server. I'm attempting to use VLC to restream it as HTTP Live Streaming.
Does anyone know whether it is possible to establish this stream through VLC's graphical user interface as opposed to the command line? If so, how?
The examples I've found so far (on here and elsewhere) have all been command-line examples, and none of them have worked at all.
I would love to hear from anyone who has actually accomplished a successful restream of RTSP to an HTTP Live Stream using a Windows server. Incidentally, I already have the website set up to serve the result, but I can't get the stream to write the .ts files regardless of what I've tried.
I'm stumped. Thanks.
Just look at this command, for example:

    vlc -I dummy rtsp://ip:port/blablabla --sout '#transcode{vcodec=h264,fps=20,vb=512,scale=1,acodec=none,venc=x264{aud,profile=high,level=60,keyint=15,bframes=0,ref=1,nocabac}}:duplicate{dst=std{access=livehttp{seglen=10,delsegs=true,numsegs=10,index=/var/www/live/mystream.m3u8,index-url=http://ip/live/mystream-########.ts},mux=ts{use-key-frames},dst=/var/www/live/mystream-########.ts},dst=std{access=http,mux=ts,dst=:8082/video.mp4}}'

The -I dummy flag runs VLC without any interface. The #transcode block re-encodes the feed to H.264, and the livehttp output writes rolling 10-second .ts segments (numsegs=10, with old segments deleted) plus the mystream.m3u8 index into the web root, while the duplicated dst serves a plain TS-over-HTTP copy on port 8082.

How do I install OpenCV on Windows Azure?

I am a beginner with Windows Azure, and I want to make an app which does facial recognition on a video stream, hence I need to install OpenCV (a C++ library).
How do I do that? And how do I get the video stream from the client app? (I am in control of the client app as well).
If the library simply needs to be on the path for your application to pick it up, then just add it as an item in the project you're deploying; it will get uploaded to Azure and deployed alongside your application.
If some commands are required to install it, you can use startup tasks.
As for the video stream, you can open a socket (using a TCP endpoint) and stream the video up to an Azure instance that way. That's probably the most efficient approach if you want real-time video processing. If you want to record the video and upload it, look at using blob storage; you can then use a message queue to signal to the worker that there is a video waiting to be processed.
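A minimal sketch of the socket approach on the client side (plain Python standard library plus OpenCV; the endpoint address and the 4-byte length-prefix framing are illustrative assumptions, not anything Azure-specific):

    # Push JPEG-compressed frames to a TCP endpoint. The worker on the
    # other end reads 4 length bytes, then that many payload bytes, per frame.
    import socket
    import struct
    import cv2

    def stream_to_endpoint(host, port, source=0):
        cap = cv2.VideoCapture(source)
        with socket.create_connection((host, port)) as sock:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                ok, jpeg = cv2.imencode('.jpg', frame)
                if not ok:
                    continue
                payload = jpeg.tobytes()
                # length-prefix each frame so the receiver can re-frame the byte stream
                sock.sendall(struct.pack('>I', len(payload)) + payload)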

Fake video streaming

I am building an iOS app which displays video streams from a somewhat complex backend. While developing, I want some sort of test video stream to work against, ideally one that also works without an internet connection.
The video stream could show, for example, the current time or just a simple animation. What would be a good way of doing this on a Mac without having to install a whole suite of tools?
On your Mac you can set up a webserver or streaming server to provide a constant video stream for testing purposes. You won't need internet access. You will, of course, need to ensure that the OS X firewall is either disabled or allows requests to the relevant ports (most likely 80).
Two simple approaches I can see:
Wowza MPEG-TS stream of the webcam on your Mac
Install Wowza Media Server; developer license is free
Configure a basic application with MPEG-TS streaming
Use an encoding application, like Flash Media Live Encoder (free), Wirecast (free demo version), or some other software, and start streaming from your webcam to the WMS
alternatively, with a bit more effort, you could set up Wowza to stream a file in a loop (a way to generate such a test file is sketched after this list)
be sure to get the codec settings correct
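If you don't have a suitable source file for the loop, a short OpenCV sketch can generate a clip that displays the current time (assuming Python with opencv-python and numpy; the filename, codec, resolution and length are arbitrary choices):

    # Generate a short test clip that displays a running wall-clock time.
    import datetime
    import numpy as np
    import cv2

    fps, size, seconds = 25, (640, 360), 10
    start = datetime.datetime.now()
    writer = cv2.VideoWriter('testclip.mp4',
                             cv2.VideoWriter_fourcc(*'mp4v'), fps, size)
    for i in range(fps * seconds):
        frame = np.zeros((size[1], size[0], 3), np.uint8)
        # advance the displayed clock with the frame index so playback matches real time
        stamp = (start + datetime.timedelta(seconds=i / fps)).strftime('%H:%M:%S')
        cv2.putText(frame, stamp, (40, 190),
                    cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3)
        writer.write(frame)
    writer.release()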
M3U8+MPEG-TS static files over plain HTTP
Set up a basic webserver (lighttpd, Apache httpd, Apache Tomcat, whatever) to serve static files
Whip up an M3U8 file that first points to a .ts media file and then points back to itself (see the example below)
Have a look at MPEG-TS/M3U8 live stuff to work out the details. You'll need a properly segmented video file to start with.
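For illustration, a minimal live-style playlist along those lines might look like the snippet below (segment names are placeholders). One reading of the "point back to itself" trick is simply to omit the #EXT-X-ENDLIST tag: without it, HLS clients treat the playlist as live and keep re-fetching it.

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:10
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:10.0,
    segment0.ts
    #EXTINF:10.0,
    segment1.ts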
