Does anyone have an independent evaluation of using a DeepStream GStreamer pipeline instead of conventional Python code?
GStreamer:
USB cam -> appsink -> (CPU to GPU) AI analysis (TensorRT) -> cv2.imshow
I think I understand that DeepStream keeps the processing on the GPU throughout the GStreamer pipeline, but is it faster? Is there any comparison? Where/what do I gain by using DeepStream?
GStreamer is a pipeline-based multimedia framework that links together a wide variety of media processing systems to complete complex workflows. For instance, GStreamer can be used to build a system that reads files in one format, processes them, and exports them in another. The formats and processes can be changed in a plug and play fashion.
DeepStream builds on GStreamer to run such tasks as a cascade. DeepStream feeds input streams into the pipeline, and since GStreamer is plugin-based, the streams pass through a chain of elements. NVIDIA provides additional plugins on top of the standard GStreamer ones, for example pgie, tracker, tiler, nvvidconv, nvosd, transform, sink, and so on. DeepStream keeps the heavy processing (decoding, inference, compositing) on the GPU, so the primary model and any secondary models can run inference inside the pipeline. The output of each inference plugin, such as pgie or sgie, is exposed as metadata attached to the buffers; this metadata includes the frame data, object locations, timestamps, and so on. You can access it through probe functions or plugins like gstdsexample, which makes it easy to post-process the stream.
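To make the metadata access concrete, here is a minimal sketch of a buffer pad probe using the DeepStream Python bindings (pyds), modeled on the deepstream_python_apps samples; it assumes you already have a working pipeline and attach the probe to the sink pad of the nvosd element, so treat the element names as placeholders:
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_probe(pad, info, user_data):
    # Walk the batch -> frame -> object metadata attached by pgie/sgie
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj_meta.rect_params
            print(frame_meta.frame_num, obj_meta.class_id, obj_meta.confidence,
                  rect.left, rect.top, rect.width, rect.height)
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

# Attached somewhere in your app, e.g.:
# nvosd.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_probe, None)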
Alongside these benefits, DeepStream has some disadvantages: building a complex pipeline and working with it is hard.
When you have streaming data (video, text, speech, images, and so on), it is worth using this kind of pipeline. Both Python and C++ implementations are possible.
I've looked everywhere and I still can't figure it out. I know of two associations you can make with streams:
Wrappers for backing data stores meant as an abstraction layer between consumers and suppliers
Data becoming available with time, not all at once
SIMD stands for Single Instruction, Multiple Data; in the literature the instructions are often said to come from a stream of instructions. This corresponds to the second association.
I don't exactly understand the Streaming in Streaming SIMD Extensions (or in Streaming Multiprocessor, for that matter), however. The instructions are coming from a stream, but can they come from anywhere else? Do we or could we have just SIMD extensions or just multiprocessors?
TL;DR: can CPU instructions be non-streaming, i.e. not come from a stream?
SSE was introduced as an instruction set to improve performance in multimedia applications. The aim of the instruction set was to quickly stream in some data (some part of a DVD to decode, for example), process it quickly (using SIMD), and then stream the result to an output (e.g. the graphics RAM). (Almost) all SSE instructions have a variant that allows them to read 16 bytes from memory. The instruction set also contains instructions to control the CPU cache and the hardware prefetcher. The "Streaming" part is pretty much just a marketing term.
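To illustrate just the "single instruction, multiple data" part, here is a tiny NumPy example; NumPy's vectorized kernels are typically compiled with SSE/AVX, so one logical operation is applied across many elements instead of a scalar loop (the sizes and values here are arbitrary):
import numpy as np

a = np.arange(8, dtype=np.float32)      # multiple data elements
b = np.full(8, 2.0, dtype=np.float32)
c = a * b + 1.0                         # conceptually one instruction per whole vector
print(c)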
Trying to find out if you can use multiple files for your dataset in Amazon SageMaker BlazingText.
I am trying to use it in Text Classification mode.
It appears that it's not possible, certainly not in File mode, but I'm wondering whether Pipe mode supports it. I don't want to have all my training data in one file, because if it's generated by an EMR cluster I would need to combine it afterwards, which is clunky.
Thanks!
You are right in that File mode doesn't support multiple files (https://docs.aws.amazon.com/sagemaker/latest/dg/blazingtext.html).
Pipe mode would in theory work but there are a few caveats:
The format expected is Augmented Manifest (https://docs.aws.amazon.com/sagemaker/latest/dg/augmented-manifest.html). This is essentially JSON Lines, for instance:
{"source":"linux ready for prime time ", "label":1}
{"source":"bowled by the slower one ", "label":2}
and then you have to pass the AttributeNames argument to the CreateTrainingJob SageMaker API (it is all explained in the link above).
With Augmented Manifest, currently only one label is supported.
In order to use Pipe mode, you would need to modify your EMR job to generate the Augmented Manifest format, and you could only use one label per sentence.
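For reference, a rough sketch of what the Pipe-mode job definition could look like with boto3 follows; the image URI, role, bucket paths, instance type, and hyperparameters are all placeholders, so treat this as an outline rather than a tested configuration:
import boto3

sm = boto3.client("sagemaker")
sm.create_training_job(
    TrainingJobName="blazingtext-pipe-example",              # placeholder name
    AlgorithmSpecification={
        "TrainingImage": "<region-specific BlazingText image URI>",
        "TrainingInputMode": "Pipe",
    },
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    InputDataConfig=[{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "AugmentedManifestFile",
            "S3Uri": "s3://my-bucket/train/manifest.jsonl",   # placeholder path
            "S3DataDistributionType": "FullyReplicated",
            "AttributeNames": ["source", "label"],
        }},
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/output/"},
    ResourceConfig={"InstanceType": "ml.c5.2xlarge", "InstanceCount": 1, "VolumeSizeInGB": 30},
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
    HyperParameters={"mode": "supervised"},
)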
At this stage, concatenating the files generated by your EMR job into a single file seems like the best option.
I'm working on a project involving RTSP streams, merging them and so on, and one of the things I need to do is pass frames from an RTSP stream to a handler or buffer of some kind so that someone else can process them with, e.g., OpenCV.
How could I do that with GStreamer?
Thanks!
GStreamer already has OpenCV-based plugins, so the best way is to write a similar plugin that applies your OpenCV code. There are elements called appsrc and appsink to feed data from an application into a pipeline or to hand data from a pipeline back to an application, but there is no appfilter element. You could use a pad probe for that, but it is not a good approach.
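If the consumer is Python/OpenCV, one common shortcut is to let OpenCV's GStreamer backend act as the appsink for you. A minimal sketch, assuming OpenCV was built with GStreamer support, the stream is H.264, and the URL is a placeholder:
import cv2

# appsink at the end of the pipeline hands decoded BGR frames to OpenCV
pipeline = ("rtspsrc location=rtsp://example.com/stream latency=100 ! "
            "rtph264depay ! h264parse ! avdec_h264 ! "
            "videoconvert ! video/x-raw,format=BGR ! appsink drop=true")

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()          # frame is a plain numpy array
    if not ok:
        break
    cv2.imshow("rtsp", frame)       # or hand it to any other OpenCV processing
    if cv2.waitKey(1) == 27:        # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()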
I have a Ruby on Rails application where users would be uploading videos and I'm looking for a system for converting videos uploaded by the users to FLV format.
Currently we are using FFmpeg, and since video conversion is a heavy task it is taking a lot of time and a lot of CPU resources.
We are looking into whether we can use the MapReduce / Hadoop framework to implement video conversion, as it is completely distributed.
Is MapReduce a good option for video conversion in real time? If so, how can that be implemented?
Note: Each video file size is around 50 - 60 MB.
Your requirement is "Real Time" conversion. Keep in mind that Hadoop is a "Batch Processing Framework".
IMHO, Hadoop is a poor choice here. A better solution would definitely be to use something like Storm:
Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing.
Personally, I implemented a project similar to yours using Storm and the result was amazing.
Another option is to use a distributed actor model, such as Akka.io or Erlang. But since you are a Ruby shop, Storm or Akka would be easier for your team.
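Whichever framework you pick, the unit of work stays the same: convert one uploaded file with FFmpeg. Here is a minimal sketch of that worker step in Python, fanned out locally with a process pool; in production each call would become the body of a Storm bolt or queue worker, and the file names and FFmpeg options are placeholders:
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def convert_to_flv(src, dst_dir="converted"):
    # One self-contained conversion: this is the piece you distribute
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    dst = str(Path(dst_dir) / (Path(src).stem + ".flv"))
    subprocess.run(["ffmpeg", "-y", "-i", src, dst], check=True)
    return dst

if __name__ == "__main__":
    uploads = ["upload1.mp4", "upload2.mov"]   # hypothetical inputs
    with ProcessPoolExecutor() as pool:        # locally: one worker per core
        for out in pool.map(convert_to_flv, uploads):
            print("wrote", out)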
Platform: amd64
Operating System: Ubuntu 8.10
Problem:
The current release of OpenCV (2.1 at time of writing) and libdc1394 don't properly interface with the new USB-interface PointGrey High-Res FireFlyMV Color camera.
Does anyone have this camera working with OpenCV on Ubuntu?
Currently, I'm working on writing my own frame-grabber using PointGrey's FlyCapture2 SDK, which works well with the camera. I'd like to interface this with OpenCV, by converting each image I grab into an IplImage object. When I write OpenCV programs, I use CMake. The example code for the FlyCapture2 SDK uses fairly simple makefiles. Does anyone know how I can take the information from the simple FlyCapture2 makefile so I can include the appropriate lines in CMakeLists.txt for my CMake build routine?
Not a simple answer (sorry), but:
Generally you don't want to use cvCaptureFromCAM() for high-performance cameras beyond an initial test that they work. Even for standard interfaces like FireWire it is very limited in which features of the camera it can control, it doesn't handle threading well, and the performance is poor, especially at high data rates.
The more common way is to control the camera with the maker's own SDK and output frames in a form (cv::Mat / IplImage) that OpenCV can process. All OpenCV image types are very flexible in being able to share data with the camera API and to specify padding / row stride, so you should be able to design it so there is no unnecessary copying.
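In C++ the usual trick is the cv::Mat constructor that takes an external data pointer and a row step, so the SDK's buffer is wrapped rather than copied. The same idea in Python, with a hypothetical grab_raw_frame() standing in for whatever FlyCapture2 call returns the buffer and its geometry, looks roughly like this:
import numpy as np
import cv2

def wrap_frame(raw_bytes, rows, cols, stride):
    # View the SDK's BGR buffer as an image without copying the pixels;
    # stride accounts for any row padding the camera API uses
    flat = np.frombuffer(raw_bytes, dtype=np.uint8)
    return np.lib.stride_tricks.as_strided(flat, shape=(rows, cols, 3), strides=(stride, 3, 1))

# Hypothetical usage, once the vendor SDK hands you a frame:
# raw, rows, cols, stride = grab_raw_frame()
# cv2.imshow("camera", wrap_frame(raw, rows, cols, stride))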