I have created an object detection, identification, and tracking application that runs on an NVIDIA Jetson TX2. The Jetson is mounted on a drone. The processed video from the application needs to be sent to the HDMI port, which is connected to a digital radio that displays the video on a controller. I have spent some time looking at how to do this and have not been able to find any materials or examples on the subject. Is this possible?
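What I have so far is essentially the sketch below; showing the annotated frames full-screen on the display attached to the HDMI port is the part I'm unsure about (this assumes a desktop/X session is running on that output, and the capture index is just a placeholder for my real pipeline):

    import cv2

    # Placeholder capture source; in my application the frames come from the
    # detection/identification/tracking pipeline.
    cap = cv2.VideoCapture(0)

    # Full-screen window on the display connected to the HDMI port
    # (assumes a desktop/X session is driving that output).
    cv2.namedWindow("out", cv2.WINDOW_NORMAL)
    cv2.setWindowProperty("out", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ... detections and tracks are drawn onto `frame` here ...
        cv2.imshow("out", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()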
Related
We're trying to scale our real-time video processing system to support over a hundred cameras. The system is mostly built in Python.
We're polling RTSP camera streams using OpenCV and plan to deliver the frames using a Kafka producer. This part of the system is called the Poller (or Stream Producer).
The cameras would be configured using a web interface, and the Poller would receive start/stop messages for each camera, along with other details such as the RTSP stream URL, via Celery. For each start request, the Poller would create a new process for that camera and poll the stream using cv2.VideoCapture().read(). Captured frames would be sent to Kafka, tagged with the camera ID and a timestamp.
We're running all our components in Docker containers and intend to scale horizontally.
How can we scale the Poller to a large number of cameras (a hundred or more) and effectively balance the camera streams across multiple Poller instances? Is there a way to achieve this using CPU/memory metrics, or is there a more standard approach we can follow with Docker?
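For reference, each per-camera Poller process essentially runs a loop like the one below (the kafka-python client, broker address, topic name, and JPEG serialization are simplifying assumptions; the real code has more error handling):

    import time
    import cv2
    from kafka import KafkaProducer  # kafka-python; broker address below is an assumption

    def poll_camera(camera_id, rtsp_url, bootstrap="kafka:9092", topic="frames"):
        producer = KafkaProducer(bootstrap_servers=bootstrap)
        cap = cv2.VideoCapture(rtsp_url)
        while True:
            ok, frame = cap.read()
            if not ok:
                cap.release()
                time.sleep(1)                  # simple retry on a dropped stream
                cap = cv2.VideoCapture(rtsp_url)
                continue
            ok, jpeg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            # Each frame is tagged with the camera ID (message key) and a timestamp (header).
            producer.send(
                topic,
                key=str(camera_id).encode(),
                value=jpeg.tobytes(),
                headers=[("ts", str(time.time()).encode())],
            )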
I would like to connect two USB webcams to a Raspberry Pi and get at least 1920 x 1080 frames at 10 fps using OpenCV. Has anyone done this and knows whether it is possible? I am worried that the Pi has only one USB bus (USB 2.0) and might run into a USB bandwidth problem.
Currently I am using an Odroid, which has both a USB 2.0 and a USB 3.0 bus, so I can connect one camera to each without any problem.
What I have found in the past is that no matter which bandwidth options you select in OpenCV, the cameras try to take up as much bandwidth as they want.
This has made multiple cameras on a single USB port a no-no.
That being said, this will depend on your camera and is very likely worth testing. I regularly use Microsoft HD-3000 cameras, and they do not like sharing a port, even on my beefy i7 laptop, because the limitation is USB host bandwidth rather than processing power.
I went through a similar development process in the past and selected an Odroid XU4 because it has multiple USB host controllers for the cameras. It also gives you a metric tonne more processing power and, more importantly, lets you buy and use the on-board chip if you want to create a custom electronics design.
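If you do want to test two cameras on the Pi's single bus, one thing that sometimes helps is forcing a compressed (MJPG) stream and pinning the resolution and frame rate, since raw YUYV at 1080p can saturate USB 2.0 on its own. A minimal OpenCV sketch (the device indices are assumptions, and whether the camera honours these requests is driver-dependent):

    import cv2

    def open_cam(index, width=1920, height=1080, fps=10):
        cap = cv2.VideoCapture(index)
        # Ask for MJPG so the camera sends compressed frames
        # (far less USB bandwidth than raw YUYV at the same resolution).
        cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
        cap.set(cv2.CAP_PROP_FPS, fps)
        return cap

    cams = [open_cam(0), open_cam(1)]
    while True:
        for cam in cams:
            ok, frame = cam.read()
            if not ok:
                raise RuntimeError("frame grab failed - often a bandwidth or device error")
            # ... process frame ...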
I'm doing an image processing project on a ZedBoard Zynq evaluation board, using the FPGA fabric on it. I have written the image processing block in HLS and created the IP with both input and output as AXI4-Stream interfaces with a width of 8 bits.
How do I read a JPEG image on my PC, send it as an AXI4-Stream to this IP block, and get the output back to display on my PC screen?
Are there any existing IP cores that accomplish this?
P.S. The FPGA board is connected to my PC via a JTAG cable, in case that's relevant.
The exchange of image data between the programmable logic (PL) and the processing system (PS) of the Zynq can be handled using direct memory access (DMA) or video direct memory access (VDMA).
This functionality is provided by Xilinx as an IP core, which receives and transmits image data on the PL side as an AXI4-Stream.
On the PS side, the DMA can be made accessible through the Linux UIO (userspace I/O) framework. For this you have to modify the device tree node of the DMA IP core in the device tree of the ARM core. Once this is done, the DMA is available under /dev/ in the Linux system.
Now it can be mapped into user space using mmap(). When configuring the DMA, a memory region in the PS RAM has to be assigned to it; this region implements a so-called stream buffer. The DMA core uses this stream buffer to read and write image data, while a Linux application can access the same memory region. This allows data to be exchanged between the PS and the PL.
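As a rough illustration of the user-space side only (the device node, mapping size, and register offset below are assumptions that depend on your device tree and on the DMA/VDMA configuration), mapping the UIO device and poking a register might look like this:

    import mmap
    import os
    import struct

    UIO_DEV = "/dev/uio0"   # DMA registers exposed via UIO (assumption)
    REG_SPAN = 0x10000      # size of the register region to map (assumption)

    fd = os.open(UIO_DEV, os.O_RDWR | os.O_SYNC)
    regs = mmap.mmap(fd, REG_SPAN)

    def write_reg(offset, value):
        # 32-bit write into the mapped register space.
        regs[offset:offset + 4] = struct.pack("<I", value)

    def read_reg(offset):
        return struct.unpack("<I", regs[offset:offset + 4])[0]

    # The real register offsets and bit fields come from the AXI DMA/VDMA
    # product guide mentioned below; 0x00 is only a placeholder here.
    write_reg(0x00, read_reg(0x00) | 0x1)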
A detailed description of the individual registers and the configuration procedure can be found in Xilinx's AXI DMA/VDMA product guide.
Once the image data is available in user space, the Ethernet connection can be used to send the image to the host PC. The JTAG connection is not the proper way to exchange image data between a host PC and the ZedBoard.
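As a sketch of that last step (the addresses, port, and fixed frame size are assumptions), a plain TCP socket is enough to push a processed frame from the board to the host PC:

    import socket

    PC_ADDR = ("192.168.1.10", 5001)   # host PC address and port (assumption)
    FRAME_BYTES = 640 * 480            # e.g. one 8-bit grayscale frame (assumption)

    def send_frame(frame_bytes):
        # Runs on the ZedBoard: ship one frame to the PC.
        with socket.create_connection(PC_ADDR) as sock:
            sock.sendall(frame_bytes)

    def recv_frame(listen_port=5001):
        # Runs on the PC (Python 3.8+): accept one connection and read exactly one frame.
        with socket.create_server(("", listen_port)) as srv:
            conn, _ = srv.accept()
            with conn:
                data = b""
                while len(data) < FRAME_BYTES:
                    chunk = conn.recv(FRAME_BYTES - len(data))
                    if not chunk:
                        break
                    data += chunk
                return data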
My project is to capture images and process them to move a wheelchair accordingly. I am using a Nexys2 FPGA board for this purpose. The Nexys2 has a USB port and the camera is also a USB camera, but I don't have drivers in Verilog that will let the Nexys2 and the camera communicate with each other. Please help me; I'll be very grateful.
Well, if you manage to write a driver for a USB camera in Verilog, you can sell that for a lot of money :)
Sarcasm aside, there is NO WAY you can access a USB camera in Verilog unless you have a USB host implemented in your FPGA, a CPU controlling it, and a software driver for that camera.
There are alternatives: you can buy a camera with an FPGA-friendly connector, like this one:
5 Mega Pixel Digital Camera Package
It comes with Verilog code that you can use in your project.
Sadly, the USB port on the Digilent Nexys 2 board does not have host capabilities; it can only act as a USB slave. The USB connection on the board is used for powering the board and for configuring the FPGA and other onboard peripherals.
The newer Nexys 3 board has a second USB port; however, it has a similar issue in that it cannot be used as a general-purpose USB host and, due to its configuration, can only be used for mouse and keyboard input.
I'm trying to figure out if it's possible to stream directly from a webcam (IP Camera / Network Camera) to an RTMP Flash Server.
The purpose is to set up a camera at a location and stream directly from it to streaming services such as DaCast or justin.tv, without needing to hook it up to a computer that does the encoding. All it would need is a wireless connection.
Technically, the camera would have to have its own encoder (H.264) and a place in its built-in configuration where you can specify the Flash Media Server to stream to.
Parts of this answer come from: AskUbuntu: Security camera system server
Certain IP cameras, across various brands and models, provide their own web page for setup/preview/monitoring, from which you can extract portions of code to use in your own website project.
You don't say what you have in mind by streaming to justin.tv or another web-based streaming service, but if what you wish to achieve is to benefit from the popularity of the streaming service itself to gain an audience, then this solution IS NOT FOR YOU.
But if you are using a web-based streaming service just to obtain portions of code to use in a customized website of your own, then you can use the code provided by your own IP camera instead.
As far as I know, the majority of IP cameras, such as those shown in this virtual shop, starting from $945.00 Mexican pesos (almost 100 US dollars), and this D-Link DCS-900 (most of them tested by me), handle motion detection, scheduled recording, and remote control by themselves (there are just a few that feature remote-controlled 360° movement).
Reaching your cameras from outside is as easy as getting a dynamic DNS service and configuring it in your modem/router (or, if you have a fixed IP, you don't have that problem). You will also need to forward the specific ports to the cameras and make each camera respond to requests on its own port.
Everything can be monitored/controlled via a web browser, as in this example of my security system, which embeds 3 cameras (1 of them remote-controlled) in a single web page (blurred where needed for privacy).
The remote-controlled camera is the one shown here, with two-way audio (yes, you can speak to people close to the camera), wireless connectivity, and infra-red night vision. (Sorry, I don't sell these cameras; I purchased them over there in Mexico City.)
In the examples provided here, I am using portions of code from the original IP camera's web-based monitoring page, as shown in the next picture:
Original DCS-900 Camera's Web Based Application
So I think this can be done directly from the IP camera's web application, but as I mentioned before, if what you want is to take advantage of a web-based streaming service (to gain an audience), you may wish to consider a different approach.
Good luck!
You can use the CamStreamer RTMP client, an application that runs directly on Axis IP cameras. A camera with CamStreamer pushes the video to any RTMP streaming service (LiveStream, uStream, YouTube Live, ...).