Who calls Present() (if anyone) when using Steam's OpenVR? - directx

I'm new to both DirectX and OpenVR, and I'm trying to wrap my head around how the OpenVR compositor API works. The docs call for rendering each eye and handing each one to the compositor via Submit(). But I'm confused about how Present() factors into this flow. I expected to need to call Present() to render each eye, but from examining some existing VR games, that doesn't happen: Present() is called to render a view on the main (non-VR) monitor, but it is not called at all for the content drawn by the compositor.
Does somebody else call Present() or something lower-level?

Present() displays a traditional swap chain in a window on your screen. With VR you use an alternative mechanism and API to present the image to the HMD, so you do not need Present() at all.
You only need it if you also want to display a mirror copy (or anything else) on the monitor alongside the HMD.
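To make the flow concrete, here is a minimal per-frame sketch, assuming D3D11 and a reasonably recent openvr.h; the eye textures and the optional mirror swap chain are placeholders for whatever your renderer owns:

    #include <openvr.h>
    #include <d3d11.h>

    // Per-frame flow: the compositor, not your app, owns presentation to the HMD.
    void RenderFrame(ID3D11Texture2D* leftEyeTex, ID3D11Texture2D* rightEyeTex,
                     IDXGISwapChain* mirrorSwapChain /* may be null */)
    {
        // Block until the compositor is ready and fetch predicted poses.
        vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
        vr::VRCompositor()->WaitGetPoses(poses, vr::k_unMaxTrackedDeviceCount,
                                         nullptr, 0);

        // ... render the scene into leftEyeTex and rightEyeTex here ...

        // Hand each eye to the compositor; it presents to the HMD internally.
        vr::Texture_t left  = { leftEyeTex,  vr::TextureType_DirectX, vr::ColorSpace_Gamma };
        vr::Texture_t right = { rightEyeTex, vr::TextureType_DirectX, vr::ColorSpace_Gamma };
        vr::VRCompositor()->Submit(vr::Eye_Left,  &left);
        vr::VRCompositor()->Submit(vr::Eye_Right, &right);

        // Present() only appears if you also draw a mirror view in a desktop window.
        if (mirrorSwapChain)
            mirrorSwapChain->Present(0, 0);
    }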

Related

vidyo.io: Moving and resizing a video using VidyoConnector API

I downloaded the latest iOS package from vidyo.io and have successfully built my application, integrated with the Vidyo libraries and using the VidyoConnector API.
When my app first comes up, I was very happy to see that a preview video appears on the screen just where I'd expect it to be! However, when I moved the view to a different location, the video did not render quite as I intended.
The video did move to the x/y position on screen that I'd hoped, but the size did not adjust to my new view dimensions. Then I found the VidyoConnectorShowViewAt API call, and that did indeed resize my view, but then the positioning of the video was off.
Is this the correct call to make when moving and resizing a view? Does anybody have any idea what I could be doing wrong? Any help would be appreciated.
Sounds like you are pretty close. If you are just moving your view to different coordinates without resizing, no API call is necessary. But if you are also resizing, then VidyoConnectorShowViewAt is indeed the right call. My hunch is that the coordinates you are passing are off: x and y should be relative to the view itself, not to the main view. So try passing 0 for both x and y and see if that helps.
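For illustration, a hypothetical sketch using the C-style API name from the question; I'm approximating the parameter types, so treat the signature as an assumption and check VidyoConnector.h in your SDK:

    // Hypothetical sketch: parameter types approximated from the VidyoClient
    // C API; verify against VidyoConnector.h before using.
    void MoveAndResizeVideo(VidyoConnector* connector, LmiViewId* viewId,
                            unsigned newWidth, unsigned newHeight)
    {
        // x and y are relative to the view being rendered into, not to the
        // main view, so (0, 0) pins the video to the view's own top-left.
        VidyoConnectorShowViewAt(connector, viewId, 0, 0, newWidth, newHeight);
    }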

Combining Cocos2d and GLKView to render the same data

I use the cocos2d engine to render animation into a CCGLView that is placed in the app's main window next to regular UIViews. My application uses an external screen, and I would like to render exactly the same content on that screen as in the CCGLView, but without any other views. I need to do this efficiently, so taking screenshots of the CCGLView is not an option.
As I understand it, cocos2d alone cannot do this, as it supports presenting only one scene at a time (CCDirector updates only one CCGLView at a time).
So my question is:
Is it possible to achieve this with GLKView? I have access to the framebuffer object from the CCGLView, and I can read pixels from the buffer. I think the best option would be to hook into the cocos2d run loop and perform this operation alongside the regular cocos2d rendering. Unfortunately, I don't know much about OpenGL ES, and I don't know how to achieve this. cocos2d uses OpenGL ES 2.0.
Edit:
For now the only suggestion came from @s1ddok (thanks): use a CCRenderTexture to draw into the CCGLView placed on the main window, and then use the CCRenderTexture's data to render to the external window. But I still don't understand how to render the texture a second time, this time to another view. Using another CCGLView would require configuring it as a target for CCDirector; how can I do that? Moreover, the second CCGLView would have to share an EAGLContext with the first one... So how do I force cocos2d to render to the second CCGLView? Any help is appreciated!
I guess the best way to go is CCRenderTexture: render the whole scene into it, then display that data on the external screen.
This is a common technique for many purposes, for example when you need to apply a shader to the whole scene.
It lets you render the scene only once per frame and then reuse the same data for both UIKit and the external screen.
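A rough sketch of that idea in cocos2d-x 2.x-style C++ (the cocos2d-iphone Objective-C calls are analogous); winW/winH and the external-screen upload are placeholders, and note that reading pixels back is the slow path, while sharing a texture between contexts would avoid the copy:

    // Create once, matching the screen size.
    CCRenderTexture* rt = CCRenderTexture::create(winW, winH,
                                                  kCCTexture2DPixelFormat_RGBA8888);
    rt->retain();

    // Each frame, inside the normal cocos2d run loop:
    rt->begin();
    CCDirector::sharedDirector()->getRunningScene()->visit(); // draw scene into rt
    rt->end();

    // Option A: draw rt->getSprite() as usual for the main CCGLView.
    // Option B: read the pixels back for the external screen.
    CCImage* image = rt->newCCImage();   // glReadPixels under the hood
    // ... upload image->getData() to a texture owned by the external
    //     screen's view/context, then draw a fullscreen quad with it ...
    image->release();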

iOS app: OpenGL Views within Storyboard-based Application

I have been working on an iOS, OpenGL-based app for the past few months. During this time, I have created both the main UIWindow and a single UIView in code, as opposed to using a storyboard. An important item to note is that I create an instance of EAGLView (used in many Apple examples), which inherits from UIView.
The code base I am working with is quite extensive, and among other things, it uses a separate rendering thread. I'll come back to this point near the end of this post.
With this in mind, I am now at the point that I want to add native UI support. To do this, I am using a storyboard (for the first time). My current setup consists of a main/root view with two buttons. Each button uses a modal segue to place a new view on the screen.
To reuse as much code as possible, I have specified that the views I segue to are of type EAGLView (as opposed to UIView). The only change I've had to make is that instead of initializing with "initWithFrame", I now initialize with "initWithCoder".
Other than moving to a storyboard, nothing else in the code base has changed. However, when I segue to an EAGLView, nothing renders; all I see is white. I am hesitant to use GLKit because it duplicates much of the functionality I have already written (I had everything rendering just fine prior to using a storyboard). In addition, GLKit provides a callback for rendering, whereas I have a separate render thread.
My scenario sounds a lot like this post:
OpenGL iOS view does not paint
I have GL error checks for every call (or for every group of calls), and no errors are being reported. What's even stranger is that when I capture an OpenGL ES frame for debugging (in Xcode), the debugger actually shows the content I expect to see.
Any ideas here? I am stumped.
Please try the following (a small GL state-check sketch follows the list):
Verify that you stop rendering in the view that you're segueing from (stopping timers, etc.); that view is still alive, since you've only pushed a new EAGLView on top of it.
Use Xcode's OpenGL ES frame capture to debug your OpenGL state in the new view. Verify that you're not missing bindings to textures or other objects.
If the above doesn't work, write the simplest rendering possible (a single quad, for example) and debug that code.
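For the second point, here is the kind of state check I mean: a minimal sketch in plain OpenGL ES 2.0 that you can drop at the top of the new view's render loop to confirm the FBO you think you're drawing into is actually bound and complete:

    #include <OpenGLES/ES2/gl.h>
    #include <cstdio>

    // Call with your GL context current, right before drawing.
    static void CheckGLTargets()
    {
        GLint fbo = 0, rbo = 0;
        glGetIntegerv(GL_FRAMEBUFFER_BINDING, &fbo);
        glGetIntegerv(GL_RENDERBUFFER_BINDING, &rbo);
        std::printf("bound FBO=%d RBO=%d\n", fbo, rbo);

        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        if (status != GL_FRAMEBUFFER_COMPLETE)
            std::printf("framebuffer incomplete: 0x%x\n", status);

        // Drain any pending errors so later checks aren't polluted.
        for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
            std::printf("GL error: 0x%x\n", err);
    }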

OpenGL iOS view does not paint

I am trying to use OpenGL to paint a view which is a subview of another view. I have created a view class for this purpose, and if I use this class in a simple test application it works fine. However, if I place an instance of this class on a particular page of my app, the OpenGL painting does not display anything. I am certain that the view is visible: I can set a background color, which is displayed, and I can receive touch events. I can also trace through the OpenGL initialization and paint routines, and everything seems fine. My paint routine IS being called, I call glGetError frequently, and no errors are returned. Comparing the trace with the working case, everything seems pretty much the same, yet nothing paints (I have even tried doing nothing but clearing the window to black, but that does nothing either).
The code for the app that does not work is far too complex to post here. I assume that I am doing something wrong, but for the life of me I cannot figure out what. Can anyone give me any ideas about why the OpenGL painting would appear to succeed and yet not draw anything, or suggest a strategy for figuring this out?
Thanks.
The link between OpenGL and the outside world is platform-specific and not part of the core API, so a problem there wouldn't affect the result of glGetError.
In the case of iOS the relevant call is EAGLContext -presentRenderbuffer:, which will work provided you've used renderbufferStorage:fromDrawable: to create storage for a render buffer from a CAEAGLLayer. You probably want to inspect the return result of presentRenderbuffer: and the code around that to look for an error rather than the internal GL state.
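To make that concrete, here is a sketch of the present path; the GL calls are plain ES 2.0, while the EAGLContext calls (Objective-C, so not expressible in plain C++) are shown as comments, with context and eaglLayer as placeholders:

    GLuint colorRenderbuffer = 0;
    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);

    // Objective-C: storage must come from the layer, or nothing can reach
    // the screen (this call returns NO on failure, so check it):
    //   [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];

    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRenderbuffer);

    // ... draw the frame ...

    // Objective-C: this is where failures show up, not in glGetError:
    //   glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    //   BOOL ok = [context presentRenderbuffer:GL_RENDERBUFFER];
    //   NSLog(@"presentRenderbuffer %s", ok ? "ok" : "FAILED");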

Display active window for video in custom allocator

I am displaying video over a panel using the custom allocator sample. For some files, the video plays in a separate window that opens on its own. How can I prevent this unwanted window from opening?
Usually the video plays in an ActiveMovie window when the decoder and the renderer cannot agree on a connection, so the graph builder falls back to the default renderer (if you are constructing your graph automatically via the RenderFile method), which displays in its own window. Check the InitializeDevice method of your allocator: if InitializeDevice always fails, your video will be rendered by the default renderer.
Make sure you are using VMR9Mode_Renderless mode. And if you are not using any mixing in VMR7/9, I suggest removing any calls to the SetNumberOfStreams method; it keeps things simpler.
A quite good example of custom allocator usage can be found here.
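To check whether InitializeDevice is the culprit, here is a minimal sketch of what it usually looks like, modeled loosely on the SDK's allocator sample; the member names (m_notify, m_surfaces) are my own placeholders:

    // Needs dshow.h, d3d9.h and vmr9.h. m_notify is the
    // IVMRSurfaceAllocatorNotify9* handed to the allocator during setup;
    // m_surfaces is a std::vector<IDirect3DSurface9*>.
    HRESULT CAllocator::InitializeDevice(DWORD_PTR dwUserID,
                                         VMR9AllocationInfo* lpAllocInfo,
                                         DWORD* lpNumBuffers)
    {
        if (!lpAllocInfo || !lpNumBuffers)
            return E_POINTER;

        m_surfaces.resize(*lpNumBuffers);

        // Let the VMR9 allocate the surfaces; if this fails, log the HRESULT
        // so you can see *why* the graph falls back to the default renderer.
        HRESULT hr = m_notify->AllocateSurfaceHelper(lpAllocInfo, lpNumBuffers,
                                                     m_surfaces.data());
        return FAILED(hr) ? hr : S_OK;
    }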
What do you mean? That when you start playing the file, it opens a separate window containing the actual video?
If so, you probably want to investigate the Video Mixing Renderer filter. You can create your own custom allocator that lets you intercept the present call, which then allows you to draw the video wherever, and however, you want.
Or, and personally I think this is easier, you could investigate the dump filter example and use it to build your own renderer. That way, when you receive a frame you can do whatever you like with it without faffing about with internals. Writing filters is very simple if you don't need them to be available outside of your application.
Edit: Have you queried for the IVMRFilterConfig9 interface and called SetRenderingMode with VMR9Mode_Renderless (see the sketch below)?
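For completeness, here is a sketch of the renderless hookup; pVMR9 is the VMR9 filter already added to your graph, and pMyAllocator is your object implementing IVMRSurfaceAllocator9 and IVMRImagePresenter9:

    // Mode must be set to VMR9Mode_Renderless before the filter is connected.
    IVMRFilterConfig9* pConfig = nullptr;
    HRESULT hr = pVMR9->QueryInterface(IID_IVMRFilterConfig9, (void**)&pConfig);
    if (SUCCEEDED(hr))
    {
        hr = pConfig->SetRenderingMode(VMR9Mode_Renderless);
        pConfig->Release();
    }

    // Wire the allocator-presenter and the VMR9 together.
    IVMRSurfaceAllocatorNotify9* pNotify = nullptr;
    hr = pVMR9->QueryInterface(IID_IVMRSurfaceAllocatorNotify9, (void**)&pNotify);
    if (SUCCEEDED(hr))
    {
        // The cookie (0 here) is echoed back in the allocator callbacks.
        hr = pNotify->AdviseSurfaceAllocator(0, pMyAllocator);
        hr = pMyAllocator->AdviseNotify(pNotify);
        pNotify->Release();
    }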
