Horizontal dislocations in DirectX video rendering
I'm working on a Qt widget application that uses DirectX for video rendering. The graphics pipeline is written from scratch. The video comes from a DeckLink capture card and is 3840x2160p59.94.
The problem is that the frames show horizontal dislocations, and I have no idea what's wrong. Any discussion would be appreciated, thanks a lot.
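Horizontal shearing like this is often a row-pitch (stride) mismatch between the captured frame and the texture it is copied into, so that is one thing worth ruling out. Below is a minimal sketch of a pitch-aware upload into a dynamic D3D11 texture, assuming 8-bit BGRA frames; the function and parameter names are placeholders, and the source pointer/pitch would come from the DeckLink frame's GetBytes()/GetRowBytes().

```cpp
// Sketch: copy one captured frame into a dynamic D3D11 texture while
// honoring both the source and destination row pitches. Assumes a
// DXGI_FORMAT_B8G8R8A8_UNORM texture created with D3D11_USAGE_DYNAMIC
// and D3D11_CPU_ACCESS_WRITE.
#include <d3d11.h>
#include <cstring>

void UploadFrame(ID3D11DeviceContext* ctx, ID3D11Texture2D* tex,
                 const unsigned char* src,  // e.g. IDeckLinkVideoInputFrame::GetBytes()
                 long srcPitch,             // e.g. IDeckLinkVideoInputFrame::GetRowBytes()
                 long width, long height)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (FAILED(ctx->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        return;

    const long rowBytes = width * 4;  // visible BGRA pixels only, no padding
    auto* dst = static_cast<unsigned char*>(mapped.pData);

    // Copy row by row: srcPitch and mapped.RowPitch are generally not equal,
    // and a single memcpy of height * rowBytes produces exactly the kind of
    // horizontal shear described above.
    for (long y = 0; y < height; ++y)
        std::memcpy(dst + y * mapped.RowPitch, src + y * srcPitch, rowBytes);

    ctx->Unmap(tex, 0);
}
```

If the copy already respects both pitches, the same symptom can come from a pixel-format mismatch (for example treating a 10-bit YUV frame as 8-bit BGRA), so the format negotiated with the DeckLink input is worth double-checking as well.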
Related
I'm currently working on an app that records video of the user and binds particle emitters to hand landmarks on the preview. As a result, the effects are shown on the camera preview, but they aren't captured in the recorded video.
I saw a great tutorial on AVVideoCompositionCoreAnimationTool (https://www.raywenderlich.com/6236502-avfoundation-tutorial-adding-overlays-and-animations-to-videos), but with that approach it is only possible to render animations onto an already recorded video.
I wonder whether there is any way to use AVVideoCompositionCoreAnimationTool to record video from the camera together with the animations in real time, or whether you know another method to do it without diving deep into Metal and so on.
Thanks in advance!
I have Augmented Reality functionality built with Unity + the Vuforia plugin, which I integrated into an iOS application. The app uses the camera as the background, and when you point the camera at a marker, a 3D object appears on it.
My task is to add buttons that start and stop capturing video (or an image) from the camera. The output should be a video of the camera scene plus the 3D object.
I did some investigation, but the only solution I found is to convert the view of the AVCaptureVideoPreviewLayer showing the camera preview into a video (or image). In my opinion, though, this solution is inefficient and not flexible.
Is there any way to get the current AVCaptureSession instance from Unity (or maybe from the Vuforia plugin)? Or maybe there is another way to solve my problem?
Any advice or guides would be very helpful.
I don't think you should use AVCaptureSession to get the preview or do the capture operation in Cocoa Touch; instead, you should capture the image in Unity and pass the data to the native Cocoa Touch API.
Here is a link showing how to capture a screenshot in Unity.
I'm working on a project, and one of the features is playing video from an RTMP path. I'm trying to find the best way to draw the frames. Right now I'm using a UIImageView, and it works, but it's neither elegant nor efficient. OpenGL might do the trick, but I've never used it before. Do you have any ideas about what I should use? If you agree that OpenGL is the way to go, could you give me a code snippet I could use for drawing the frames?
Thank you.
For background, I'm developing a component that is like digital signage, with moving ticker text that runs from left to right in a loop, laid over a playing video or a still image.
But the ticker text is not fluid enough; especially when a video is loaded from inside or outside the application, it jerks badly.
I have been stuck on this issue for years; the technologies I have tested include D3D and WPF.
I have never tried OpenGL, although personally I think it is equivalent to D3D.
Can you give me some guidelines or even samples?
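One thing that helps regardless of D3D, WPF, or OpenGL is deriving the ticker position from elapsed time rather than incrementing it by a fixed number of pixels per frame, so a late frame draws the text where it should be at that instant instead of lagging and then jumping. Here is a minimal, rendering-API-agnostic sketch; the class and parameter names are just placeholders.

```cpp
// Sketch: time-based ticker offset so dropped or late frames do not
// accumulate visible stutter. Call OffsetForNow() once per rendered frame.
#include <chrono>
#include <cmath>

class TickerScroller {
public:
    TickerScroller(float pixelsPerSecond, float loopWidthPixels)
        : speed_(pixelsPerSecond),
          loopWidth_(loopWidthPixels),
          start_(std::chrono::steady_clock::now()) {}

    // Horizontal offset (in pixels) at which to draw the text this frame.
    float OffsetForNow() const {
        using namespace std::chrono;
        const float elapsed =
            duration_cast<duration<float>>(steady_clock::now() - start_).count();
        // Wrap so the text loops; fmod keeps the value in [0, loopWidth_).
        return std::fmod(elapsed * speed_, loopWidth_);
    }

private:
    float speed_;
    float loopWidth_;
    std::chrono::steady_clock::time_point start_;
};
```

The other half of the problem is making sure video loading and decoding cannot block the thread that presents the ticker; the time-based offset above only guarantees that whatever frames do get presented land in the right place.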
I'm hoping to use iOS 5 AVFoundation, with or without OpenGL, to record video from the camera and overlay/merge another video clip on top using some form of alpha-channel compositing / foreground matting.
A sample use case for the combined output might be a video of an animated character interacting with the user's recorded video clip from the iPhone/iPad camera.
Is this possible right now with iOS 5, or potentially with Brad Larson's GPUImage framework? Can the alpha channels of the two video sources be combined easily?
If anyone has any sample code they could share, or can offer any guidance, I'd be really appreciative.
The Apple AVEditDemo (plus the accompanying WWDC 2010 video) would be a start. It doesn't show video overlays with alpha, but if you haven't worked with AVFoundation before, it's an excellent intro.
Here's another good walkthrough: video-composition-with-ios