I am a newbie at all this stuff related to DirectX, rendering video, rendering images, and so on.
So I need your help:
I have a WPF application that needs to render a lot of videos (at least 10 in parallel), and these videos must be rendered on the GPU, not the CPU. I already tried WPF-MediaKit, but it consumes a lot of memory (RAM) and CPU. I also tried the WPF MediaElement, and it also consumes a lot of memory (RAM).
So, my question is:
1 - How would you build an application that renders a lot of videos at the same time on the GPU with the minimum possible memory usage? What technologies would you use?
Note: If you suggest more than one technology/tool that need to work together, can you explain how they fit together, please (like I'm really dumb)? For example: "To render a video and show it on screen you need to use the EVR with DXVA, because ..." Or, if you can give me some example applications, I would be very grateful. :) (Sorry, but working at a low level is still a little hard for me and I need some orientation on this.)
I want to use Electron as a debug overlay for a Vulkan render engine I'm building. Since I have a lot of requirements for this debug tool, writing one in-engine myself would take way too long. I would like to use Electron instead of Qt or similar, since I feel it's a lot more powerful and flexible, with less effort (once it's working).
The problem is that I somehow have to either get my render output into Electron, or Electron's output into my engine. As far as I can tell, the easiest solution would be to copy the data back to the CPU and then transfer it, but that would be extremely slow and cost a lot of bandwidth. So I was wondering if there is a better solution.
I have two ideas to make it work, but I didn't find any way to implement them, or even anyone talking about it.
The first would be to have Electron configured to run on the GPU, somehow get the handle of its output texture, and import it into my render engine using Vulkan external memory. However, as I have no experience with Chromium, and there doesn't seem to be anyone else who has done this, I don't think it would work out too well.
The second idea was to do the opposite: using a canvas element with WebGL, and again using Vulkan external memory, copy the output of my engine to a texture and display it. I have full control over the draw process here, so I think it would be a lot simpler and more stable. However, again, I found no way of setting up a WebGL texture handle as an external memory object.
Is there any better way of doing this, or some guidance on how to implement it?
My goal is as follows: I have to read in a video that is stored on the SD card, process it frame by frame (doing image processing on each frame), and then store it in a new file on the SD card.
At first I wanted to use OpenCV for Android, but I did not seem to be able to read the video, as described here.
I am guessing you already know that doing this on a mobile device, or any compute-limited device, is not ideal, simply because video manipulation is very compute-intensive, which translates to slow execution and heavy battery usage on many devices. If you do have the option to do the processing on the server side, it is definitely worth considering.
Assuming that for your use case you need to do it on the mobile device: OpenCV on Android will now allow you to read in a video and access each frame - @StephenG mentions this in his answer to the question you refer to above.
In the past, functionality like this did not get ported to OpenCV for Android, as the guidance was to use ffmpeg for frame grabbing on Android devices.
According to more recent documentation, however, this should now be available on Android using the VideoCapture class (note that I have not used this myself):
http://docs.opencv.org/java/2.4.11/org/opencv/highgui/VideoCapture.html
It is worth noting that the OpenCV Android examples are all currently based around Eclipse, and if you want to use Android Studio, getting things up and running initially can be quite tricky. The following worked for me recently, but as both Android Studio and OpenCV change over time, you may find you have to do some forum hunting if it does not work for you:
https://stackoverflow.com/a/35135495/334402
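To make the VideoCapture approach concrete, the read/process/write loop looks roughly like this. Note this is a desktop Python sketch (the opencv-python package) rather than the Android Java API from the link above, and the file paths and the invert step are just placeholders - but the loop structure is the same on both platforms:

```python
def invert(frame):
    # Placeholder per-frame processing step: invert 8-bit pixel values.
    # With a numpy image array this inverts every pixel in one operation.
    return 255 - frame

def process_video(in_path, out_path):
    import cv2  # assumes the opencv-python package is installed

    cap = cv2.VideoCapture(in_path)            # open the source video
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    out = cv2.VideoWriter(out_path, fourcc, fps, size)
    while True:
        ok, frame = cap.read()                 # grab one frame
        if not ok:
            break                              # end of stream
        out.write(invert(frame))               # process and store it
    cap.release()
    out.release()
```

On Android, the equivalent grab-a-frame calls would come from the Java VideoCapture class documented at the link above.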
Taking a different approach, you can use ffmpeg itself, via a wrapper on Android, for tasks like this.
The advantage of the wrapper approach is that you can use all the usual command-line syntax, and there is a lot of info on the web to help you get the right parameters.
The disadvantage is that ffmpeg was not really designed to be wrapped in this way, so you do sometimes see issues. Having said that, it is a common approach now, and as long as you choose a well-used wrapper library you should at least have a good community to discuss any issues you come across. I have used this approach in a hand-crafted way in the past, but if I were doing it again I would use one of the popular examples, such as:
https://github.com/WritingMinds/ffmpeg-android-java
I need to render video from multiple IP cameras into several controls within the client application.
On top of the video, I should be able to add some OSD such as timestamp and camera name.
What I'm trying to do has nothing to do with 3D since we're talking about digital video with some text on it.
Which API is more suitable for this purpose? Direct3D or Direct2D?
Performance should also be a consideration here.
It used to be that Direct2D was a poor choice for Windows Phone (if you care about that platform) because it wasn't supported, but Windows Phone 8.1 has it now, so that's less of an issue.
My experience with D2D was that it offered fast, high quality 2D rendering, and I would say it is a good choice.
You might want to take a look at this article on Code Project. That looks appropriate for your purposes.
If you are certain you only need MS system support, then you're all set.
Another way to go would be a cross platform system like nanovg, which offers nice 2D rendering and would work on a Mac. Of course, you'd need to figure out how to do the video part on non windows systems.
Regarding D3D, you could certainly do it that way, but my guess would be it would make some things trickier to do. Don't forget you can combine the two as well...
I would like to create an XNA application and have a live stream of its output (I can render everything to a separate RenderTarget and just use that as the source).
I need this because the application will be shown on a big outdoor display and the only way to get live content there is using live streaming.
Is this possible? How much lag should I expect between the real time rendering and what is actually streamed and displayed on the big panel?
Do you really need to implement this in your application? There are plenty of tools available that will just do that for you.
See this question, where software like XSplit is suggested.
It would definitely be easier for you not to have to write this!
We are planning a Web App for a Hackathon that's happening in about 2 weeks.
The app's basic functions are:
The users are guided step by step to upload a video, an audio file, and an image.
The image is used as a cover for the audio, turning them into a video file.
The two video files are then merged, creating a single video from the initial three files.
So, my problem is:
How do you create a video from an audio file with an image as its "cover"?
How do you merge these two videos?
We are thinking of using Heroku for deployment. Is there a way to do it using something like stremio?
What would be the best approach? A VPS running a C++ script? What's the easiest way to do it?
FFMPEG would be a good start, as seen here:
https://stackoverflow.com/a/6087453/1258001
FFMPEG can be found at http://ffmpeg.org/
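To sketch the two steps as ffmpeg invocations (file names are placeholders, and these argument lists are a starting point rather than a tested pipeline): `-loop 1` repeats the cover image for the duration of the audio, and the `concat` filter joins the two videos, re-encoding so mismatched codecs are not a problem:

```python
def cover_to_video(image, audio, out):
    """Loop a still image over the audio track; -shortest stops
    the output when the audio ends."""
    return ["ffmpeg", "-loop", "1", "-i", image, "-i", audio,
            "-c:v", "libx264", "-tune", "stillimage",
            "-c:a", "aac", "-shortest", out]

def concat_videos(a, b, out):
    """Merge two videos back to back. The concat filter re-encodes,
    which avoids codec-mismatch problems at the cost of CPU time."""
    return ["ffmpeg", "-i", a, "-i", b,
            "-filter_complex",
            "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v][a]",
            "-map", "[v]", "-map", "[a]", out]

# On a server with the ffmpeg binary available, run each step with e.g.:
# import subprocess
# subprocess.run(cover_to_video("cover.png", "track.mp3", "part1.mp4"), check=True)
# subprocess.run(concat_videos("part1.mp4", "upload.mp4", "final.mp4"), check=True)
```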
Another option, which may be overkill, would be Blender 3D, as it could also provide similar results, can be controlled via shell commands, and may be more flexible for complex asset-composition needs.
In any case, you're going to want a server that can run heavy rendering processes, which will require a large amount of RAM and CPU. It may be a good choice to go with a render farm that uses the GPU as the main processor for rendering, as that will give you more bang for your buck, but it could be very difficult to set up and keep running correctly. I would also say a VPS would not be a good choice for this. In any case, the type of resources you're going to need happens to be the most expensive in terms of web server costs. Best of luck, and please update with your results.