DirectX flickering video

Alright, so I wrote a custom VMR9 Allocator/Presenter which seems to work fine. However, when I attempt to copy video frames from the Allocator/Presenter surfaces into my application surfaces, the video appears to flicker. Audio playback is fine so I'm fairly certain it's not an issue of the machine being bogged down or anything. This is the code I have in my render loop.
g_pd3dDevice->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_ARGB(255, 0, 0, 0), 1.0f, 0);
// render the scene
if (SUCCEEDED(g_pd3dDevice->BeginScene()))
{
    //g_pd3dDevice->SetRenderTarget(0, g_pd3dSurface);
    g_pd3dDevice->StretchRect(vmr9_ap->renderSurface, src, g_pd3dSurface, dest, D3DTEXF_NONE);
    // end the scene
    g_pd3dDevice->EndScene();
}
However, if I change it to this (commenting out clearing the buffer)
// g_pd3dDevice->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_ARGB(255, 0, 0, 0), 1.0f, 0);
// render the scene
if (SUCCEEDED(g_pd3dDevice->BeginScene()))
{
    //g_pd3dDevice->SetRenderTarget(0, g_pd3dSurface);
    g_pd3dDevice->StretchRect(vmr9_ap->renderSurface, src, g_pd3dSurface, dest, D3DTEXF_NONE);
    // end the scene
    g_pd3dDevice->EndScene();
}
the flickering goes away. I'm worried that this is somehow bad form/hackish and might cause more problems than it solves. Does anyone have any experience in this area? Is there a better solution?
Thanks!

If you intend to repaint the entire viewport every frame there is no reason to do a clear, and skipping it can actually yield a lot of performance gains, so go for it! As for your flickering, that might be something different. Are you doing your drawing in a WM_PAINT message? If so, you may want to also intercept the WM_ERASEBKGND message and simply return 1 when you get it. This prevents Windows from trying to erase the background and has helped me get rid of some flickering in the past.
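For reference, here is a minimal sketch of that WM_ERASEBKGND interception in a Win32 window procedure; the WndProc name and the surrounding message handling are just placeholders for however your window is set up:
// Returning non-zero from WM_ERASEBKGND tells Windows the background is
// already handled, so it skips its own erase pass (a common flicker source).
LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ERASEBKGND:
        return 1; // claim we erased the background ourselves
    case WM_PAINT:
        ValidateRect(hWnd, NULL); // rendering happens in the D3D loop, not here
        return 0;
    default:
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }
}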
FYI: Ever do the noclip cheat in Doom or Quake, and when you walk outside a wall everything starts leaving "trails" all over? That's because they're not clearing the back buffer, since under normal circumstances the entire scene would be redrawn every time anyway. I say if it's good enough for id it's good enough for me! :)
Edit: Oh, and one more thing! I'm not sure if it's required or not, but I always do my clear AFTER calling BeginScene(). That could also be contributing to your flicker.
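If you do keep the clear, a minimal sketch of that ordering, reusing the device and surface names from the question, would be:
if (SUCCEEDED(g_pd3dDevice->BeginScene()))
{
    // clear inside the scene, before any drawing
    g_pd3dDevice->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_ARGB(255, 0, 0, 0), 1.0f, 0);
    g_pd3dDevice->StretchRect(vmr9_ap->renderSurface, src, g_pd3dSurface, dest, D3DTEXF_NONE);
    g_pd3dDevice->EndScene();
}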

TBH I reckon you are best off writing your own DirectShow renderer filter that copies the data directly to a texture and then drawing a quad over the screen with that texture. You'd get much better performance. Writing a renderer filter is actually pretty easy, especially when you appreciate that you don't have to expose it to the operating system, so most of the difficult DirectShow hurdles don't need to be jumped.
Edit: Look up the "Dump Filter"; it comes as part of Microsoft's DirectShow helper code ...
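To give a rough idea of the approach (this is not the Dump Filter itself, which writes samples to a file), here is a hedged sketch of a renderer filter built on the DirectShow base classes that copies each frame into a Direct3D 9 texture; the RGB32-only media type check, the texture creation assumptions, and the class/member names are all simplifications for illustration:
#include <streams.h>   // DirectShow base classes (CBaseVideoRenderer, CMediaType, ...)
#include <d3d9.h>

// Minimal renderer filter sketch: accepts RGB32 video and copies each decoded
// frame into a dynamic D3DPOOL_DEFAULT texture the app then draws on a quad.
class CTextureRenderer : public CBaseVideoRenderer
{
public:
    CTextureRenderer(IDirect3DTexture9* pTexture, LONG width, LONG height, HRESULT* phr)
        : CBaseVideoRenderer(CLSID_NULL, NAME("Texture Renderer"), NULL, phr),
          m_pTexture(pTexture), m_width(width), m_height(height) {}

    // Only accept 32-bit RGB video in this sketch
    HRESULT CheckMediaType(const CMediaType* pmt)
    {
        if (*pmt->Type() != MEDIATYPE_Video) return E_FAIL;
        if (*pmt->Subtype() != MEDIASUBTYPE_RGB32) return E_FAIL;
        return S_OK;
    }

    // Called for every sample delivered by the upstream decoder
    HRESULT DoRenderSample(IMediaSample* pSample)
    {
        BYTE* pSrc = NULL;
        if (FAILED(pSample->GetPointer(&pSrc))) return E_FAIL;

        D3DLOCKED_RECT lr;
        if (FAILED(m_pTexture->LockRect(0, &lr, NULL, D3DLOCK_DISCARD))) return E_FAIL;

        // Copy row by row because the texture pitch may differ from the frame stride
        const LONG srcStride = m_width * 4;
        BYTE* pDst = static_cast<BYTE*>(lr.pBits);
        for (LONG y = 0; y < m_height; ++y)
            memcpy(pDst + y * lr.Pitch, pSrc + y * srcStride, srcStride);

        m_pTexture->UnlockRect(0);
        return S_OK;
    }

private:
    IDirect3DTexture9* m_pTexture;
    LONG m_width, m_height;
};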

I was faced with the same problem. In my case, the cause of the flickering was a StretchRect call inside of a BeginScene/EndScene pair.
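Presumably the fix is then to move the surface copy out of the scene block; a minimal sketch, reusing the names from the question above:
// StretchRect is performed outside of BeginScene/EndScene
g_pd3dDevice->StretchRect(vmr9_ap->renderSurface, src, g_pd3dSurface, dest, D3DTEXF_NONE);

if (SUCCEEDED(g_pd3dDevice->BeginScene()))
{
    // ... draw anything that actually needs a scene here ...
    g_pd3dDevice->EndScene();
}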


How to properly use setNeedsDisplayInRect for iOS apps?

I'm on Yosemite 10.10.5 and Xcode 7, using Swift to make a game targeting iOS 8 and above.
EDIT: More details that might be useful: This is a 2D puzzle/arcade game where the player moves stones around to match them up. There is no 3D rendering at all. Drawing is already too slow and I haven't even gotten to explosions with debris yet. There is also a level fade-in, which is very concerning. But this is all on the simulator so far; I don't yet have an actual iPhone to test with, and I'm betting the actual device will be at least a little faster.
I have my own Draw2D class, which is a type of UIView, set up as in this tutorial. I have a single NSTimer which initiates the following chain of calls in Draw2D:
[setNeedsDisplay]; // which calls drawRect, which is the master draw function of Draw2D
override func drawRect(rect: CGRect)
{
    scr_step(); // the master update function, which loops thru all objects and calls their individual update functions. I put it here so that updating and drawing are always in sync
    CNT = UIGraphicsGetCurrentContext(); // get the current drawing context
    switch (Realm) // based on what realm I'm in, call the draw function for that realm
    {
    case rlm.intro: scr_draw_intro();
    case rlm.mm: scr_draw_mm();
    case rlm.level: scr_draw_level(); // this in particular loops thru all objects and calls their individual draw functions
    default: return;
    }
    var i = AARR.count - 1; // loop thru my own animation objects and draw them too; note it's iterating backwards because sometimes they destroy themselves
    while (i >= 0)
    {
        let A = AARR[i];
        A.scr_draw();
        i -= 1;
    }
}
All the drawing works fine, but it's slow.
The problem is that now I want to optimize drawing. I want to draw only in the dirty rectangles that need drawing, not the whole screen, which is what setNeedsDisplay is doing.
I could not find any tutorials or good example code for this. The closest I found was Apple's documentation here, but it does not explain, among other things, how to get a list of all dirty rectangles so far. It also does not explicitly state whether the list of dirty rectangles is automatically cleared at the end of each call to drawRect.
It also does not explain whether I have to manually clip all drawing based on the rectangles. I found conflicting info about that around the web; apparently different iOS versions do it differently. In particular, if I'm going to have to manually clip things, then I don't see the point of Apple's core function in the first place. I could just maintain my own list of rectangles and manually compare each drawing destination rectangle to the dirty rectangle to see if I should draw anything. That would be a huge pain, however, because I have a background picture in each level and I would have to draw a piece of it behind every moving object. What I'm really hoping for is the proper way to use setNeedsDisplayInRect to let the core framework do automatic clipping for everything that gets drawn on the next draw cycle, so that it automatically draws only that piece of the background plus the moving object on top.
So I tried some experiments: First in my array of stones:
func scr_draw_stone()
{
// the following 3 lines are new, I added them to try to draw in only dirty rectangles
if (xvp != xv || yvp != yv) // if the stone's coordinates have changed from its previous coordinates
{
MyD.setNeedsDisplayInRect(CGRectMake(x, y, MyD.swc, MyD.shc)); // MyD.swc is Draw2D's current square width in points, maintained to softcode things for different screen sizes.
}
MyD.img_stone?.drawInRect(CGRectMake(x, y, MyD.swc, MyD.shc)); // draw the plain stone
img?.drawInRect(CGRectMake(x, y, MyD.swc, MyD.shc)); // draw the stone's icon
}
This did not seem to change anything. Things were drawing just as slowly as before. So then I put it in brackets:
[MyD.setNeedsDisplayInRect(CGRectMake(x, y, MyD.swc, MyD.shc))];
I have no idea what the brackets do, but my original setNeedsDisplay was in brackets just like they said to do in the tutorial. So I tried it in my stone object, but it had no effect either.
So what do I need to do to make setNeedsDisplayInRect work properly?
Right now, I suspect there's some conditional check I need in my master draw function, something like:
if (ListOfDirtyRectangles.count == 0)
{
    [setNeedsDisplay]; // just redraw the whole view
}
else
{
    [setNeedsDisplayInRect(ListOfDirtyRectangles)];
}
However, I don't know the name of the built-in list of dirty rectangles. I found this saying the method name is getRectsBeingDrawn, but that is for Mac OS X; it doesn't exist in iOS.
Can anyone help me out? Am I on the right track with this? I'm still fairly new to Macs and iOS.
You should really avoid overriding drawRect if at all possible. Existing views/technologies take advantage of hardware capabilities to make things a lot faster than manually drawing in a graphics context could, including buffering the contents of views, using the GPU, etc. This is repeated many times in the "View Programming Guide for iOS".
If you have a background and other objects on top of that, you should probably use separate views or layers for those rather than redraw them.
You may also consider technologies such as SpriteKit, SceneKit, OpenGL ES, etc.
Beyond that, I'm not quite sure I understand your question. When you call setNeedsDisplayInRect, it will add that rect to those that need to be redrawn (possibly merging with rectangles that are already in the list). drawRect: will then be called a bit later to draw those rectangles one at a time.
The whole point of the setNeedsDisplayInRect / drawRect: separation is to make sure multiple requests to redraw a given part of the view are merged together, and drawing only happens once per redraw cycle.
You should not call your scr_step method in drawRect:, as it may be called multiple times in a single redraw cycle. This is clearly stated in the "View Programming Guide for iOS" (emphasis mine):
The implementation of your drawRect: method should do exactly one thing: draw your content. This method is not the place to be updating your application’s data structures or performing any tasks not related to drawing. It should configure the drawing environment, draw your content, and exit as quickly as possible. And if your drawRect: method might be called frequently, you should do everything you can to optimize your drawing code and draw as little as possible each time the method is called.
Regarding clipping, the documentation of drawRect states that:
You should limit any drawing to the rectangle specified in the rect parameter. In addition, if the opaque property of your view is set to YES, your drawRect: method must totally fill the specified rectangle with opaque content.
Not having any idea what your view shows, what the various methods you call do, or what actually takes time, it's difficult to provide much more insight into what you could do. Provide more details about your actual needs, and we may be able to help.

WebGL strange rendering artifacts

UPDATE:
I have posted the code online to demonstrate the issue:
http://cutama.github.io/
To see the problem, position the mouse on the red rectangle and zoom in and out using the mouse scroll wheel. After a while you will see the triangles flickering; then use the left mouse button to rotate.
Controls:
left mouse click&drag: orbit, middle mouse click&drag: pan, mouse scroll: zoom
END UPDATE
I have encountered a strange rendering problem with WebGL. Whenever I move the camera around, some triangles appear to go missing at random. See pictures below.
I have been digging around but could not find the cause. Any ideas what may be causing this?
This is the normal rendering of the geometry:
Missing triangles:
Another example of missing triangles:
I did some debugging with the WebGL Inspector.
GL Trace:
Clicking on the missing pixel shows that it is being depth culled, but nothing is in front of it... so why is it culled?
Comparison with normal unculled pixel:
Vertex data inside the buffer. The triangles are very small. Is this causing the problem?
I don't know what causes this bug, but here are some things you might need to consider:
Check that you specified the correct primitive topology (it should be gl.TRIANGLES in most cases).
Check whether you render with gl.drawElements or gl.drawArrays.
If you are using an index buffer, you should use gl.drawElements.
Check that you are using the correct culling configuration.
Check that you are using the correct depth comparison function. (If you are not using the depth test, you don't need to care about this.)
Found that the issue was caused by incorrect triangle indices in the model... still not sure why it would cause flickering.

iOS and multiple OpenGL views

I'm currently developing an iPad app which uses OpenGL to draw some very simple (no more than 1000 or 2000 vertices) rotating models in multiple OpenGL views.
There are currently 6 views in a grid, each one running its own display link to update the drawing. Due to the simplicity of the models, it's by far the simplest method to do it; I don't have the time to code a full OpenGL interface.
Currently, it's doing well performance-wise, but there are some annoying glitches. The first 3 OpenGL views display without problems, and the last 3 only display a few triangles (while still retaining the ability to rotate the model). Also, there are some cases where the glDrawArrays call goes straight into EXC_BAD_ACCESS (especially on the simulator), which tells me there is something wrong with the buffers.
What I checked (as well as double- and triple-checked) is:
Buffer allocation seems OK
All resources are freed on dealloc
Instruments shows some warnings, but nothing that seems related
I'm thinking it's probably related to my having multiple views drawing at the same time, so is there any known thing I should have done there? Each view has its own context, but perhaps I'm doing something wrong with that...
Also, I just noticed that in the simulator, the afflicted views are flickering between the right drawing with all the vertices and the wrong drawing with only a few.
Anyway, if you have any ideas, thanks for sharing!
Okay, I'm going to answer my own question since I finally found what was going on. It was a small missing line that was causing all those problems.
Basically, to have multiple OpenGL views displayed at the same time, you need :
Either, use the same context for every view. Here, you have to take care not to draw from multiple threads at the same time (i.e. lock the context somehow, as explained in this answer). And you have to re-bind the frame- and render-buffers on each frame.
Or, use different contexts for each view. Then, you have to re-set the current context on each frame, because other display links could (and would, as in my case) cause your OpenGL calls to use the wrong data. Also, there is no need to re-bind frame- and render-buffers since your context is preserved.
Also, call glFlush() after each frame, to tell the GPU to finish rendering each frame fully.
In my case (the second one), the code for rendering each frame (in iOS) looks like:
- (void)drawFrame:(CADisplayLink *)displayLink {
    // Set current context, assuming _context
    // is the class ivar for the OpenGL context
    [EAGLContext setCurrentContext:_context];
    // Clear whatever you want
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    // Do matrix stuff
    ...
    glUniformMatrix4fv(...);
    // Set your viewport
    glViewport(0, 0, self.frame.size.width, self.frame.size.height);
    // Bind object buffers
    glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
    glVertexAttribPointer(_glVertexPositionSlot, 3, ...);
    // Draw elements
    glDrawArrays(GL_TRIANGLES, 0, _currentVertexCount);
    // Discard unneeded depth buffer
    const GLenum discard[] = {GL_DEPTH_ATTACHMENT};
    glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discard);
    // Present render buffer
    [_context presentRenderbuffer:GL_RENDERBUFFER];
    // Unbind and flush
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glFlush();
}
EDIT
I'm going to edit this answer, since I found out that running multiple CADisplayLinks can cause some issues. You have to make sure to set the frameInterval property of your CADisplayLink instance to something other than 0 or 1. Otherwise, the run loop will only have time to call the first render method, and then it'll call it again, and again. In my case, that was why only one object was moving. Now it's set to 3 or 4 frames, and the run loop has time to call all the render methods.
This applies only to the application running on the device. The simulator, being very fast, doesn't care about such things.
It gets tricky when you want multiple UIViews that are OpenGL views. On this site you should be able to read all about it: Using multiple openGL Views and uikit

XNA beginner question about draw method

I understand that I have to draw everything in draw(), and that it loops continuously.
But I don't want to draw the texture again and again. For example, I want to create a texture and draw something to the texture (not with SpriteBatch); then I will only draw that texture in draw().
Is it possible?
What can I use?
You have to draw again and again; in short, if you don't, it won't show. A wise man once wrote in a Windows development book:
Ask not why the text on your windows has to be constantly drawn, ask why it never used to be in DOS/Unix command line.
If something is placed over the area you're drawing to, and you don't redraw it, it simply won't be there. You need to keep drawing it for it to be sustained on screen. It's done very quickly and won't hurt anything (especially if you're thinking in terms of a background).
Not drawing it again is a performance optimisation. You should only do that if you really need to.
If you do need to do this, create a render target, draw your scene to the render target, and then draw your render target to the screen each frame (using SpriteBatch makes this easy) instead of your scene.
Take a look at this question about caching drawing using render targets.

Why isn't Quartz double buffering my drawInContext()?

I am rendering a simple line drawing (a line with some text in the middle) in a CALayer subclass via drawInContext(). I update this layer as the user is performing a gesture by calling setNeedsDisplay on it. The effect that I am seeing is what I might expect if there were no double buffering going on... i.e. I see parts of new rendering overlapping parts of old rendering. When I stop updating (complete the gesture) the system "catches up" and I always see the correct final result, but during the updates I see inconsistent results... This effect is not subtle and sometimes it is extreme... e.g. if I keep updating fast enough I can keep stale parts of the drawing on the screen for seconds while the new parts are drawing ahead...
I don't understand this at all. If Quartz is doing buffering then it seems that it is not blitting the result to the screen in its entirety or it is miscalculating the affected area.
Things I've tried:
1) I am disabling implicit animations and doing all of the drawing within a CATransaction
2) I am not making a mistake in my drawing... It's literally just two lines with some text in between... there is no way that I'm rendering the intermediate artifacts.
3) I have tried limiting the rate of updates by skipping most of them... but even at the lower rate I see artifacts until I stop updating and let the system catch up.
4) BTW, this happens identically in the simulator and on the device (iPad).
Is it necessary for me to draw into an offscreen buffer myself and copy it to the screen in its entirety? I thought that I had read that Quartz does this for me.
Update:
As usual, after hours of banging my head against the wall I find the (partial) answer 5 minutes after posting the question. I realized that I was using a CATiledLayer in order to get my layer re-rendered on zoom. If I switch it back to a regular CALayer the glitches go away. So I guess what I am seeing is artifacts of the separate tiles rendering. Now I am trying to figure out how to deal with this...
So, it turns out that I had three problems:
1) CATiledLayer explicitly fades in new tile content with a default time of 0.25 seconds... This was causing havoc with my drawing. I overrode this in my CATiledLayer subclass:
+ (CFTimeInterval)fadeDuration {
    NSLog(@"got fade duration");
    return 0;
}
2) I also had to adjust the maximum tile size up (I set it to 1024x1024 though I don't know what size it is actually using).
3) I was making adjustments to my layer's frame periodically during the updates and that seemed to cause additional problems for the tiled layer. I am making changes to stop that.
With all of those changes the performance seems acceptable now.
