How to scale video processing from multiple cameras using OpenCV, Kafka & Docker

We're trying to scale our real-time video processing system to support over a hundred cameras. The system is built mostly in Python.
We're polling RTSP camera streams using OpenCV and plan to deliver the frames using a Kafka producer. This part of the system is called the Poller or Stream Producer.
Cameras are configured through a web interface, and the Poller receives start/stop messages for each camera (along with other details such as the RTSP stream URL) via Celery. For each start request, the Poller spawns a new process for that camera and polls the stream using cv2.VideoCapture().read(). Captured frames are sent to Kafka, tagged with the camera ID and a timestamp.
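For concreteness, a minimal sketch of one such per-camera poller process might look like the following (the kafka-python client is assumed; the broker address, "frames" topic name, and camera details are hypothetical):

```python
# Minimal sketch of one per-camera poller process (kafka-python assumed;
# the broker address and "frames" topic name are hypothetical).
import time
from multiprocessing import Process

import cv2
from kafka import KafkaProducer

def poll_camera(camera_id: str, rtsp_url: str) -> None:
    producer = KafkaProducer(bootstrap_servers="kafka:9092")
    cap = cv2.VideoCapture(rtsp_url)
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break  # stream dropped; a real poller would reconnect
            ok, jpeg = cv2.imencode(".jpg", frame)  # keep messages small
            if not ok:
                continue
            producer.send(
                "frames",
                key=camera_id.encode(),          # partition by camera ID
                value=jpeg.tobytes(),
                headers=[("ts", str(time.time()).encode())],
            )
    finally:
        cap.release()
        producer.flush()

# On a Celery "start" message, spawn a dedicated process for that camera:
# Process(target=poll_camera, args=("cam-42", "rtsp://...")).start()
```

Keying each message by camera ID also keeps all frames from one camera in order on a single Kafka partition.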
We're running all our components in Docker containers and intend to scale horizontally.
How can we scale the Poller to a large number of cameras (a hundred or more) and effectively balance the camera streams across multiple instances of the Poller? Is there a way to achieve this using CPU/memory metrics, or a more standard approach that we can follow for Docker?

Related

Output video to HDMI without creating a window with imshow?

I have created an object detection, identification, and tracking application that runs on an NVIDIA Jetson TX2. The Jetson is mounted on a drone. The processed video from the application needs to be sent to the HDMI port, which is connected to a digital radio that displays the video on a controller. I have spent some time looking at how to do this and have not been able to find any materials or examples on the subject. Is this possible?

Best practices for Kafka streams

We have a predict service written in Python that provides machine learning functionality: you send it a set of data, and it returns anomaly detection results, predictions, and so on.
I want to use Kafka Streams to process the real-time data.
There are two options:
1. Kafka Streams jobs handle only the ETL: load the data, apply simple transforms, and save the data to Elasticsearch. A timer then periodically loads data from ES, calls the predict service to compute results, and saves them back to ES.
2. Kafka Streams jobs do everything beyond the ETL as well: after the ETL step they send the data to the predict service, save the computed results to Kafka, and a consumer forwards the results from Kafka to ES.
I think the second way is more real-time, but I don't know whether it's a good idea to run so many prediction tasks inside streaming jobs.
Are there any common patterns or advice for such an application?
Yes, I'd opt for the second option as well.
What you can do is use Kafka as the data pipeline between your ML-Training module and your Prediction module. These modules could very well be implemented in Kafka Streams.
Take a look at the diagram below: [diagram of the Kafka pipeline between the training and prediction modules omitted]
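As a rough sketch of the second option, the prediction step can sit between two topics: a consumer reads the ETL output, calls the Python predict service over HTTP, and publishes the result to a topic that a separate consumer forwards to ES. The topic names, broker address, and predict-service URL below are hypothetical, and the kafka-python and requests libraries are assumed:

```python
# Sketch of option 2: consume transformed records, call the predict
# service, and publish results back to Kafka (another consumer then
# forwards them to Elasticsearch). All names/URLs are hypothetical.
import json

import requests
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "transformed-data",                  # output topic of the ETL job
    bootstrap_servers="kafka:9092",
    group_id="predictors",               # consumer group allows scaling out
    value_deserializer=lambda v: json.loads(v),
)
producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

for message in consumer:
    record = message.value
    # Call the Python predict service over HTTP
    resp = requests.post("http://predict-service:8000/predict", json=record)
    resp.raise_for_status()
    producer.send("predictions", resp.json())  # read by the ES forwarder
```

Because the prediction workers form a consumer group, you can add instances to spread the prediction load without changing the pipeline.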

How to stress-test HTTP Live Streaming

We built a YouTube-like Rails application that serves videos using HTTP Live Streaming; the videos are hosted on our company's S3-like cloud service (actually the Ceph Object Gateway S3 API).
It's the first public application on that storage service, and we would like to know beforehand how many concurrent viewers it can handle.
We know that the network connection (10 Gbps) will become the bottleneck at a certain stage, but we have no idea how much load the actual storage cloud service is able to handle.
How would you stress-test the HTTP Live Streaming?
Is something similar to this (UDP) suggestion an option in this (TCP) case?
You can use either a JMeter SaaS or cloud servers to overcome the network issue, and with JMeter you can use this commercial plugin, which realistically simulates player behaviour and gives useful metrics:
http://www.ubik-ingenierie.com/blog/easy-and-realistic-load-testing-of-http-live-stream-hls-with-apache-jmeter/
Metrics provided by the plugin are:
- Buffer fill time (time it took to start playing)
- Lag time (how many seconds playback paused)
- Lag ratio (waiting time over watching time)
Disclaimer: we are behind the development of this solution.
If you're testing HTTP streams, you might be able to test them using JMeter, though you'd probably need a hosted JMeter solution to create enough traffic.
I'm not sure if you'd be able to get any helpful response time info, but you would at least be able to easily create and ramp up the load.
Let me know if you need help with the JMeter side.
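If you would rather script the load yourself instead of using JMeter, a crude sketch of the same idea is to simulate N HLS "players" that fetch the media playlist and download every segment, timing how long the first segment takes (a rough "buffer fill time"). The playlist URL below is hypothetical and the requests library is assumed:

```python
# Crude scripted HLS load test: N concurrent "players" fetch the media
# playlist and pull down each segment. The playlist URL is hypothetical.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urljoin

import requests

PLAYLIST = "https://cdn.example.com/video/index.m3u8"  # hypothetical

def play_once(player_id: int) -> float:
    start = time.time()
    text = requests.get(PLAYLIST, timeout=10).text
    # Media playlists list segment URIs on non-comment lines
    segments = [l for l in text.splitlines() if l and not l.startswith("#")]
    first_segment_time = None
    for seg in segments:
        requests.get(urljoin(PLAYLIST, seg), timeout=30).content
        if first_segment_time is None:
            first_segment_time = time.time() - start  # "buffer fill time"
    return first_segment_time or 0.0

with ThreadPoolExecutor(max_workers=100) as pool:  # 100 concurrent viewers
    fill_times = list(pool.map(play_once, range(100)))
print("avg buffer fill time: %.2fs" % (sum(fill_times) / len(fill_times)))
```

Note this downloads segments as fast as possible, so it stresses the storage backend harder than real players would; pacing each loop to the segment duration would be more realistic.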

Can a webcam stream directly to an RTMP Flash Server without a computer?

I'm trying to figure out if it's possible to stream directly from a webcam (IP Camera / Network Camera) to an RTMP Flash Server.
The purpose is to be able to set up a camera at a location and be able to stream directly from it to streaming services such as DaCast or justin.tv without the need to have it hooked up to a computer that does the encoding. All it would need is a wireless connection.
Technically, the camera would have to have its own encoder (H.264) and a place within its built-in configuration where you can set the Flash Media Server to stream to.
Parts of this answer come from: AskUbuntu: Security camera system server
Certain IP cameras, across several brands and models, provide their own web page for setup/preview/monitoring, from which you can extract portions of code to use in your own website project.
You don't say what you have in mind by streaming to justin.tv or another web-based streaming service, but if what you wish to achieve is to benefit from the popularity of the streaming service itself to gain an audience, then this solution IS NOT FOR YOU.
But if you are using a web-based streaming service just to gather portions of code to be used in a customized website of your own, then you can use the code provided by your own IP camera.
As far as I know, the majority of IP cameras, such as those shown in this virtual shop (starting from $945.00 Mexican pesos, almost 100 US dollars) and this D-Link DCS-900 (most of them tested by me), handle motion detection, scheduled recording, and remote control by themselves (there are just a few that feature remote-controlled 360° movement).
Reaching your cameras from outside is as easy as getting a dynamic DNS service and using it in your modem/router (if you have a fixed IP, you don't have that problem). You will also need to route the specific ports to the cameras and make each camera respond to requests on its specific port.
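Once dynamic DNS and port forwarding are in place, the camera stream can be pulled from anywhere by hostname, for example with OpenCV. Everything below (hostname, port, credentials, and stream path) is hypothetical and camera-specific; check your camera's documentation for its actual RTSP or HTTP stream URL:

```python
# Sketch: pulling a remote IP camera stream through dynamic DNS and a
# forwarded port. Hostname, port, credentials, and path are hypothetical.
import cv2

cap = cv2.VideoCapture("rtsp://user:pass@myhouse.dyndns.example:5541/stream1")

while True:
    ok, frame = cap.read()
    if not ok:
        break  # connection lost or stream ended
    cv2.imshow("remote camera", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```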
Everything can be monitored/controlled via a web browser, as in this example of my security system, which embeds 3 cameras (1 of them remote-controlled) in a single web page (blurred where needed for privacy).
The remote-controlled camera is the one shown here, with two-way audio (yes, you can speak to people close to the camera), wireless connectivity, and infrared night vision. (Sorry, I don't sell these cameras; I purchased them over there in Mexico City.)
In the examples provided here I am using portions of code from the original IP camera's web-based monitoring page, as shown in the next picture:
[image omitted: original DCS-900 camera's web-based application]
So I think this can be done directly from the IP camera's web application, but as I mentioned before, if what you wish is to take advantage of the web-based streaming service (to gain an audience), you may want to consider a different option.
Good luck!
You can use the CamStreamer RTMP client, an application that runs directly on Axis IP cameras. A camera with CamStreamer pushes the video to any RTMP streaming service (LiveStream, uStream, YouTube Live, ...).

How do I install OpenCV on Windows Azure?

I am a beginner with Windows Azure and I want to make an app that does facial recognition on a video stream. Hence I need to install OpenCV (a C++ library).
How do I do that? And how do I get the video stream from the client app? (I am in control of the client app as well).
If the library simply needs to be on the path for your application to pick it up, then just add it as an item in the project you're deploying; it will get uploaded to Azure and deployed alongside your application.
If some commands are required to install it, you can use startup tasks.
As for the video stream, you can open a socket (using a TCP endpoint) and stream the video up to an Azure instance that way. That's probably the most efficient approach if you want real-time video processing. If you want to record the video and upload it, look at using blob storage. You can then use a message queue to signal to the worker that there is a video waiting to be processed.
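A minimal sketch of the socket approach, from the client side, might look like this: capture frames, JPEG-encode them, and send them length-prefixed to the role's TCP endpoint. The hostname and port are hypothetical; the worker role would do the reverse (read the length, then the frame) before running face detection with OpenCV:

```python
# Sketch of the "open a TCP socket and stream video" suggestion: a client
# that sends length-prefixed JPEG frames. Host and port are hypothetical.
import socket
import struct

import cv2

cap = cv2.VideoCapture(0)                              # client-side webcam
sock = socket.create_connection(("myapp.cloudapp.net", 10100))

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)          # compress the frame
        if not ok:
            continue
        data = jpeg.tobytes()
        # 4-byte big-endian length prefix so the server can frame messages
        sock.sendall(struct.pack(">I", len(data)) + data)
finally:
    cap.release()
    sock.close()
```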
