Resize Texture2D SharpDX - directx

How can I resize a Texture2D in SharpDX? I'm using SharpDX to duplicate the screen, and I use Media Foundation to encode the texture into a video file. My problem is that when I open an application in fullscreen and it has a different resolution from the system resolution, I get a blank screen in my recording. Is there a better way to resize the texture before encoding with Media Foundation without hurting performance? I'm using hardware acceleration. Thanks.

It depends on how exactly you use Media Foundation, but you can use the Video Processor MFT explicitly. If you use IMFMediaSession/IMFTopology, you need to add this MFT to the topology; if you use the Sink Writer, you initialize the MFT yourself and process the samples with it. In that case you need to supply the DXGI device manager to the MFT using the MFT_MESSAGE_SET_D3D_MANAGER message.
This MFT is only available on Windows 8 and higher, so if you need to support an older Windows version you can use the Video Resizer DSP instead, but it is not hardware accelerated.
Another option for resizing the texture is to create a render target of the desired size and draw the texture into it. After that, feed that render target texture to the encoder.

Related

WebGL Upload Texture Data to the GPU without a draw call

I'm using webgl to do YUV to RGB conversions on a custom video codec.
The video has to play at 30 fps. In order to make this happen I'm doing all my math every other requestAnimationFrame.
This works great, but I noticed when profiling that uploading the textures to the gpu takes the longest amount of time.
So I uploaded the "Y" texture and the "UV" texture separately.
Now the first "requestAnimationFrame" will upload the "Y" texture like this:
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, yTextureRef);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, textureWidth, textureHeight, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, yData);
The second "requestAnimationFrame" will upload the "UV" texture in the same way, and make a draw call to the fragment shader doing the math between them.
But this doesn't change anything in the profiler. I still see nearly 0 GPU time on the frame that uploads the "Y" texture, and the same amount of time as before on the frame that uploads the "UV" texture.
However, if I add a draw call to my "Y" texture upload function, then the profiler shows the expected results: every frame has nearly half the GPU time.
From this I'm guessing the Y texture isn't really uploaded to the GPU by the texImage2D call.
However I don't really want to draw the Y texture on the screen as it doesn't have the correct UV texture to do anything with until a frame later. So is there any way to force the gpu to upload this texture without performing a draw call?
Update
I misunderstood the question
It really depends on the driver. The problem is OpenGL/OpenGL ES/WebGL's texture API really sucks. Sucks is a technical term for 'has unintended consequences'.
The issue is that the driver can't really fully upload the data until you draw, because it doesn't know what you're going to change. You could change all the mip levels, in any order and at any size, and then fix them all up before drawing, so until you draw it has no idea which other functions you're going to call to manipulate the texture.
Consider: you create a 4x4 level 0 mip
gl.texImage2D(
gl.TEXTURE_2D,
0, // mip level
gl.RGBA,
4, // width
4, // height
...);
What memory should it allocate? 4 (width) * 4 (height) * 4 (RGBA)? But what if you then call gl.generateMipmap? Now it needs 4*4*4 + 2*2*4 + 1*1*4. OK, but now you allocate an 8x8 mip on level 3. You intend to then replace levels 0 to 2 with 64x64, 32x32 and 16x16 respectively, but you did level 3 first. What should it do when you supply level 3 before replacing the levels above it? You then add in level 4 as 4x4, level 5 as 2x2, and level 6 as 1x1.
As you can see, the API lets you change mips in any order. In fact I could allocate level 7 as 723x234 and then fix it later. The API is designed not to care until draw time, when all the mips must be the correct size, at which point it can finally allocate memory on the GPU and copy the mips in.
You can see a demonstration and test of this issue here. The test uploads mips out of order to verify that WebGL implementations correctly fail when they are not all the correct size and correctly start working once they are.
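For illustration, a stripped-down sketch of that kind of test (assuming a WebGL context in gl; the rgba helper here is my own and just builds zero-filled RGBA pixel data of the right length):
// Upload mips out of order; the texture only becomes usable once the whole chain is consistent.
const rgba = (size) => new Uint8Array(size * size * 4);
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
// Level 3 first (8x8), then levels 0..2 (64, 32, 16), then levels 4..6 (4, 2, 1).
gl.texImage2D(gl.TEXTURE_2D, 3, gl.RGBA, 8, 8, 0, gl.RGBA, gl.UNSIGNED_BYTE, rgba(8));
[64, 32, 16].forEach((size, level) => gl.texImage2D(gl.TEXTURE_2D, level, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, rgba(size)));
[4, 2, 1].forEach((size, i) => gl.texImage2D(gl.TEXTURE_2D, 4 + i, gl.RGBA, size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, rgba(size)));
// Only at draw time can the driver check the chain and finally allocate GPU memory for it.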
You can see this was arguably a bad API design.
They added gl.texStorage2D to fix it, but gl.texStorage2D is not available in WebGL1, only WebGL2. gl.texStorage2D has new issues of its own though :(
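For reference, a minimal sketch of what that looks like (this assumes a WebGL2 context in gl and some level0Data array you already have):
// WebGL2 only: allocate immutable storage for the whole mip chain up front.
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texStorage2D(gl.TEXTURE_2D, 7, gl.RGBA8, 64, 64); // 7 levels: 64x64 down to 1x1
// Levels can now only be filled, never resized or reallocated:
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, 64, 64, gl.RGBA, gl.UNSIGNED_BYTE, level0Data); // level0Data: a Uint8Array(64*64*4)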
TLDR; textures get uploaded to the driver when you call gl.texImage2D but the driver can't upload to the GPU until draw time.
Possible solution: use gl.texSubImage2D. Since it does not allocate memory, it's possible the driver could upload sooner. I suspect most drivers don't, because you can still call gl.texSubImage2D before drawing, but it's worth a try.
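A minimal sketch of that idea, reusing the yTextureRef / textureWidth / textureHeight / yData names from the question: allocate the full-size texture once with texImage2D passing null, then only ever update its contents with texSubImage2D.
// One-time setup: allocate the texture at its final size so it never needs reallocating.
gl.bindTexture(gl.TEXTURE_2D, yTextureRef);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, textureWidth, textureHeight, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, null);

// Per frame: replace the contents only; no reallocation is implied by this call.
gl.bindTexture(gl.TEXTURE_2D, yTextureRef);
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight, gl.LUMINANCE, gl.UNSIGNED_BYTE, yData);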
Let me also add that gl.LUMINANCE might be a bottleneck as well. IIRC DirectX doesn't have a corresponding format and neither does the OpenGL Core Profile. Both support a RED-only format, but WebGL1 does not, so LUMINANCE has to be emulated by expanding the data on upload.
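If a WebGL2 context is an option, a true single-channel R8 texture avoids that expansion entirely. A sketch, again reusing the question's names; whether it actually speeds up the upload depends on the driver:
// WebGL2: upload the Y plane as a one-channel R8 texture instead of emulated LUMINANCE.
gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1); // rows of a Y plane are tightly packed
gl.bindTexture(gl.TEXTURE_2D, yTextureRef);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R8, textureWidth, textureHeight, 0, gl.RED, gl.UNSIGNED_BYTE, yData);
// In a GLSL ES 3.00 shader, read the value from the .r channel: float y = texture(yTex, uv).r;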
Old Answer
Unfortunately there is no way to upload video to WebGL except via texImage2D and texSubImage2D
Some browsers try to make that happen faster. I notice you're using gl.LUMINANCE. You might try using gl.RGB or gl.RGBA and see if things speed up. It's possible browsers only optimize for the more common case. On the other hand it's possible they don't optimize at all.
Two extensions that would allow using video without a copy have been proposed, but AFAIK no browser has ever implemented them.
WEBGL_video_texture
WEBGL_texture_source_iframe
It's actually a much harder problem than it sounds like.
Video data can be in various formats. You mentioned YUV but there are others. Should the browser tell the app the format or should the browser convert to a standard format?
The problem with telling the app is that lots of devs will get it wrong, and then a user will provide a video in a format they don't support.
The WEBGL_video_texture extension converts to a standard format by re-writing your shaders. You tell it uniform samplerVideoWEBGL video and then it knows it can re-write your color = texture2D(video, uv) to color = convertFromVideoFormatToRGB(texture(video, uv)). It also means the browser would have to re-write shaders on the fly if you play videos in different formats.
Synchronization
It sounds great to get the video data into WebGL, but now you have the issue that by the time you get the data and render it to the screen you've added a few frames of latency, so the audio is no longer in sync.
How to deal with that is out of scope for WebGL, as WebGL doesn't have anything to do with audio, but it does point out that it's not as simple as just handing WebGL the data. Once you make the data available, people will ask for more APIs to get the audio and more info so they can delay one or both and keep them in sync.
TLDR; there is no way to upload video to WebGL except via texImage2D and texSubImage2D

How do you use OpenGL ES 2.0 (shaders) for video processing?

This question is about iOS. On Android, it is very easy to use OpenGL ES 2.0 to render a texture on a view (for previewing) or to send it to an encoder (for file writing). I haven't been able to find any tutorial for iOS on achieving video playback (previewing a video effect from a file) and video recording (saving a video with an effect) with shader effects. Is this possible on iOS?
I've come across a shader demo called GLCameraRipple, but I have no clue how to use it more generically, e.g. with AVFoundation.
[EDIT]
I stumbled upon this tutorial about OpenGL ES, AVFoundation and video merging on iOS while searching for a snippet. That's another interesting entry point.
It's all very low-level stuff over in iOS land, with a whole bunch of pieces to connect.
The main thing you're likely to be interested in is CVOpenGLESTextureCache. As the CV prefix implies, it's part of Core Video, in this case its primary point of interest is CVOpenGLESTextureCacheCreateTextureFromImage which "creates a live binding between the image buffer and the underlying texture object". The documentation further provides you with explicit advice on use of such an image as a GL_COLOR_ATTACHMENT — i.e. the texture ID returned is usable both as a source and as a destination for OpenGL.
The bound image buffer will be tied to a CVImageBuffer, one type of which is a CVPixelBuffer. You can supply pixel buffers to an AVAssetWriterInputPixelBufferAdaptor wired to an AVAssetWriter in order to output to a video.
In the other direction, an AVAssetReaderOutput attached to an AVAssetReader will vend CMSampleBuffers, which can be queried for attached image buffers (if you've got video coming in and not just audio, there'll be some) that can then be mapped into OpenGL via the texture cache.

Camera direct to OpenGL texture on iOS

On Android, it is possible to make the camera write its output directly to an OpenGL texture (of type GL_TEXTURE_EXTERNAL_OES), avoiding buffers on the CPU altogether.
Is such a thing possible on iOS?
The output you get from the camera in iOS is a CMSampleBufferRef, with a CVPixelBufferRef inside. (See documentation here). iOS from version 5 has CVOpenGLESTextureCache in the CoreVideo framework, which allows you to create an OpenGL ES texture using a CVPixelBufferRef, avoiding any copies.
Check the RosyWriter sample on Apple's developer website; it's all there.

Rendering Video to an OpenGL texture in iOS with scrubbing

I have used the method from iOS4: how do I use video file as an OpenGL texture? to get video frames rendering in OpenGL successfully.
This method, however, seems to fall down when you want to scrub (jump to a certain point in the playback), as it only supplies video frames sequentially.
Does anyone know a way this behaviour can successfully be achieved?
One easy way to implement this is to export the video to a series of frames, store each frame as a PNG, and then "scrub" by seeking to the PNG at a specific offset. That gives you random access into the image stream, at the cost of decoding the entire video first and holding all the data on disk. It also involves decoding each frame as it is accessed, which eats up CPU, but modern iPhones and iPads can handle it as long as you are not doing too much else.

iOS: Video as GL texture with alpha transparency

I'm trying to figure out the best approach to display a video on a GL texture while preserving the transparency of the alpha channel.
Information about video as GL texture is here: Is it possible using video as texture for GL in iOS? and iOS4: how do I use video file as an OpenGL texture?.
Using ffmpeg to help with alpha transparency (but not App Store friendly) is covered here:
iPhone: Display a semi-transparent video on top of a UIView?
The video source would be filmed in front of a green screen for chroma keying. The video could be left untouched, keeping the green screen, or processed in a video editing suite and exported to QuickTime Animation or Apple ProRes 4444 with alpha.
There are multiple approaches that I think could potentially work, but I haven't found a full solution.
Realtime threshold processing of the video looking for green to remove
Figure out how to use the above mentioned Quicktime codecs to preserve the alpha channel
Blending two videos together: 1) Main video with RGB 2) separate video with alpha mask
I would love to get your thoughts on the best approach for iOS and OpenGL ES 2.0
Thanks.
The easiest way to do chroma keying for simple blending of a movie and another scene would be to use the GPUImageChromaKeyBlendFilter from my GPUImage framework. You can supply the movie source as a GPUImageMovie, and then blend that with your background content. The chroma key filter allows you to specify a color, a proximity to that color, and a smoothness of blending to use in the replacement operation. All of this is GPU-accelerated via tuned shaders.
Images, movies, and the live cameras can be used as sources, but if you wish to render this with OpenGL ES content behind your movie, I'd recommend rendering your OpenGL ES content to a texture-backed FBO and passing that texture in via a GPUImageTextureInput.
You could possibly use this to output a texture containing your movie frames with the keyed color replaced by a constant color with a 0 alpha channel, as well. This texture could be extracted using a GPUImageTextureOutput for later use in your OpenGL ES scene.
Apple showed a sample app at WWDC in 2011 called ChromaKey that demonstrates how frames of video can be passed to an OpenGL texture, manipulated, and optionally written out to a video file, all in a very performant way.
It's written to use a feed from the video camera, and uses a very crude chromakey algorithm.
As the other poster said, you'll probably want to skip the chromakey code and do the color knockout yourself beforehand.
It shouldn't be that hard to rewrite the ChromaKey sample app to use a video file as input instead of a camera feed, and it's quite easy to disable the chroma key code.
You'd need to modify the setup on the video input to expect RGBA data instead of RGB or Y/UV. The sample app is set up to use RGB, but I've seen other example apps from Apple that use Y/UV instead.
Have a look at the free "APNG" app on the App Store. It shows how an animated PNG (.apng) can be rendered directly to an iOS view. The key is that APNG supports an alpha channel in the file format, so you don't need to mess around with chroma tricks that will not really work for all your video content. This approach is also more efficient than multiple layers or chroma tricks, since another round of processing is not needed each time a texture is displayed in a loop.
If you want to have a look at a small example xcode project that displays an alpha channel animation on the side of a spinning cube with OpenGL ES2, it can be found at Load OpenGL textures with alpha channel on iOS. The example code shows a simple call to glTexImage2D() that uploads a texture to the graphics card once for each display link callback.
