I'm developing an application using a VMR9 Allocator.
The allocator allows me to draw the DirectShow filter graph output onto a texture.
I noticed that if I don't use SwapEffect.Copy in exclusive mode, my video rate drops below 25 fps (the nominal rate).
I need to use SwapEffect.Discard in order to enable multisampling.
Is there any workaround that lets me use the VMR9 allocator with SwapEffect.Discard?
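For reference, this is roughly what the present parameters in question look like in native D3D9 terms (a sketch only; SwapEffect.Discard in Managed DirectX maps to D3DSWAPEFFECT_DISCARD, and the format, size and sample count below are assumptions):

D3DPRESENT_PARAMETERS pp = {};
pp.Windowed         = FALSE;                     // exclusive (fullscreen) mode
pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;     // required for multisampling
pp.BackBufferFormat = D3DFMT_X8R8G8B8;           // assumption: matches the display mode
pp.BackBufferWidth  = 1280;                      // assumption
pp.BackBufferHeight = 720;                       // assumption
pp.MultiSampleType  = D3DMULTISAMPLE_4_SAMPLES;  // only valid with DISCARD; verify
                                                 // support via CheckDeviceMultiSampleType
pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;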
I'm looking for a way to read the whole video memory that a video card outputs to a display. That includes hardware-accelerated output, video playback, and fullscreen output (which, I suspect, may be handled differently from windowed mode).
In short: I want to be able to capture everything that is going to be represented on a display.
I suppose that IF it's possible, it will be OS-dependent. The targets I'm interested in are Windows, OS X, and Linux.
Do you have any hint?
For Windows, I guess you could take CamStudio, strip it down, and use it to record the screen, then do whatever you want with the output. Other than that, you could look into forensic kernel drivers for accessing RAM. It's not exactly as simple as a pointer into video memory anymore, haha.
Digital Rights Management, a feature of Windows, attempts to block your access to blocks of graphics-card frame-buffer memory. Using an open-source driver under Linux would seem to be the only way to access this memory, or, as mentioned earlier, some third-party software that knows some back doors or hacks for locating other programs' frame-buffer space.
Unless, of course, you are trying to capture output from your own program (i.e. you are calling the video/graphics creation functions yourself); in that case there are APIs to read back display frames in both DirectX and OpenGL.
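For that own-program case, a minimal OpenGL read-back sketch (assumes a current GL context; the function name and width/height parameters are mine, and Direct3D has an equivalent path via GetRenderTargetData):

#include <GL/gl.h>
#include <vector>

// Grab the current back buffer; call after rendering, before SwapBuffers.
std::vector<unsigned char> readBackBuffer(int width, int height)
{
    std::vector<unsigned char> pixels(width * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows
    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels;                         // bottom row first (GL convention)
}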
I think I found some resources that can help with capturing the display memory on Windows:
Fastest method of screen capturing
How to save backbuffer to file in DirectX 10?
http://betterlogic.com/roger/2010/07/fast-screen-capture/
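The classic GDI approach those links describe boils down to something like this (a sketch; note it can miss overlays and DRM-protected content, which is exactly the limitation discussed above):

#include <windows.h>

// Copy the primary display into a bitmap we own (single frame).
HBITMAP grabDesktop()
{
    HDC screen = GetDC(nullptr);                    // DC for the whole display
    HDC mem    = CreateCompatibleDC(screen);
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);
    HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
    HGDIOBJ old = SelectObject(mem, bmp);
    BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY); // copy the desktop pixels
    SelectObject(mem, old);
    DeleteDC(mem);
    ReleaseDC(nullptr, screen);
    return bmp;                                     // caller: GetDIBits + DeleteObject
}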
I have a lab project that mainly uses PyAudio, and to better understand how it works I made some measurements, in this case of the time between callbacks (using callback mode).
I timed it and got an interesting result:
(chunk size 256, fs = 44.1 kHz): 0.0099701 s; 0.0000365 s; 0.0000201 s; 0.0201579 s
This pattern goes on and on.
Between two longer calls there are two shorter calls, and sometimes the longer call is shorter (mind you, I do nothing in the program other than time the callbacks).
If we average this out we get our desired callback time:
1/44100 × 256 (roughly 5.8 ms)
Here is my measurement visualized: [figure: plot of the measured callback intervals, not reproduced here]
So can someone explain what exactly happens here under the hood?
What happens under the hood in PortAudio is dependent on a number of factors, including:
Which native audio API PortAudio is talking to
What buffer size and latency parameters you passed to Pa_OpenStream()
The capabilities of the audio hardware and its drivers, including its supported buffer sizes, buffering model and timing characteristics.
Under some circumstances PortAudio will request larger buffers from the native audio API and then invoke the PortAudio user callback multiple times in quick succession. This can happen if you have selected a small callback buffer size and a long latency.
Another scenario is that the native audio API doesn't support the buffer size that you requested for your callback size (framesPerBuffer parameter to Pa_OpenStream()). In this case PortAudio will be forced to use a driver-supported buffer size and then "adapt" between that buffer size and your callback buffer size. This adaption process can cause irregular timing.
Yet another possibility is that the native audio API uses a large ring buffer. Each time PortAudio polls the native host API, it will work to fill the native ring buffer by calling your callback as many times as needed. In this case irregular timing is related to the polling rate.
The above are not the only possibilities.
One likely explanation of what is happening in your case is that PortAudio is calling your callback 3 times in fast succession (a guess would be that the native buffer size is 3x your callback buffer size), for one of the reasons above.
Another possibility is that the native audio subsystem is signalling PortAudio irregularly. This can happen if a system layer below PortAudio is doing similar kinds of buffering to what I described above. I have seen this happen with DirectSound on Windows 7 for example. ASIO4ALL drivers will exhibit +/- 1ms jitter (which is not what you're seeing).
You can try reducing the requested stream latency to 0 and see if that changes the result. This will force double-buffering, which may or may not produce stable output. Another thing to try is to pass paFramesPerBufferUnspecified as the framesPerBuffer parameter, which will cause the callback to be called with the native buffer size -- then you can observe whether there is greater periodicity, what that buffer size is, and also whether the buffer size varies from callback to callback.
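A rough sketch of that experiment against the PortAudio C API (output device, format and run length are arbitrary choices of mine; printf in a callback is not real-time-safe, but it is fine for a quick measurement):

#include <portaudio.h>
#include <cstdio>

// Logs the interval between successive callbacks and the buffer size
// the host actually delivers.
static int timingCallback(const void*, void* output, unsigned long frameCount,
                          const PaStreamCallbackTimeInfo* timeInfo,
                          PaStreamCallbackFlags, void*)
{
    static PaTime last = 0;
    std::printf("dt = %.4f s, frames = %lu\n",
                timeInfo->currentTime - last, frameCount);
    last = timeInfo->currentTime;
    float* out = static_cast<float*>(output);
    for (unsigned long i = 0; i < frameCount; ++i)
        out[i] = 0.0f;                      // emit silence (mono paFloat32)
    return paContinue;
}

int main()
{
    Pa_Initialize();
    PaStreamParameters params = {};
    params.device           = Pa_GetDefaultOutputDevice();
    params.channelCount     = 1;
    params.sampleFormat     = paFloat32;
    params.suggestedLatency = 0;            // as suggested above
    PaStream* stream = nullptr;
    Pa_OpenStream(&stream, nullptr, &params, 44100,
                  paFramesPerBufferUnspecified,  // let the host pick its native size
                  paNoFlag, timingCallback, nullptr);
    Pa_StartStream(stream);
    Pa_Sleep(2000);                         // watch a couple of seconds of output
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
}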
You didn't say which operating system and host API you're targeting, so it's hard to give more specific details than the above.
The internal buffering models used by the various PortAudio host API backends are described in some detail on the PortAudio wiki.
To answer a related question: why is it like this? Aside from the cases where it is a function of the lower layers of the native audio subsystem, or the buffer adaption process, it is often a result of specifying a large suggested latency to Pa_OpenStream(). Some PortAudio host APIs will relax the buffer periodicity if the specified latency is very high, in order to reduce system load that would be caused by high-frequency timer callbacks.
I have a bit of a problem developing one of my programs in C++ (Visual Studio). Right now I'm struggling with connecting multiple webcams (via USB cables), creating a separate thread for each of them to capture frames, and another thread to process the images.
I use OpenCV to process the frames, but the problem is that I don't reach the webcam's full capability (it supports 25 fps; I get only 18). Is there some library I could use to capture the frames faster, and then process them with OpenCV?
From my research, the most popular approach is to use DirectShow to get the frames and OpenCV to process them.
Do you agree? Or do you have another solution?
I wouldn't be offended by some links :)
DirectShow is only used if you open your capture with the CV_CAP_DSHOW flag, like:

VideoCapture capture( CV_CAP_DSHOW + 0 ); // 0, 1, 2 ... your cam id there

(without it, it defaults to VFW)
The capture already runs in a separate thread, so wrapping it in more threads won't give you any gain.
Another obstacle with multiple cams is USB bandwidth: if you have ports on both the back and the front of your machine, don't plug all your cams into the same port/controller, or you'll just saturate it.
OpenCV uses DirectShow. Using DirectShow (the primary video capture API on Windows) directly will obviously get you equal or better performance (even more so if OpenCV is set to use Video for Windows). USB cams typically hit the USB bandwidth limit, and hence a frame-rate limit; using DirectShow to capture in compressed formats, or in formats with fewer bits per pixel, is the way to reach higher frame rates within the same USB bandwidth.
Another typical problem causing low frame rates is slow synchronous processing delaying the capture. You typically identify this by putting trivial processing into the same capture loop and seeing higher FPS compared to processing-enabled operation.
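A quick sketch combining both points: request MJPG (compressed, so less USB bandwidth) and time a capture loop with no processing in it (camera id, target FPS and frame count are assumptions; whether the FOURCC request is honored depends on the camera and backend):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap(CV_CAP_DSHOW + 0);                   // DirectShow backend, cam id 0
    cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M','J','P','G'));  // compressed -> less USB bandwidth
    cap.set(CV_CAP_PROP_FPS, 25);                             // ask for the nominal rate

    cv::Mat frame;
    int n = 0;
    double t0 = (double)cv::getTickCount();
    while (n < 250 && cap.read(frame))                        // trivial loop, no processing
        ++n;
    double secs = ((double)cv::getTickCount() - t0) / cv::getTickFrequency();
    std::printf("measured %.1f fps\n", n / secs);             // compare against the processing-enabled run
}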
If I run a 3D application (like a benchmark tool or a game) under PIX and replay the capture later, does the replay actually call the same API (and thus invoke the actual driver and GPU, rather than falling back to software-emulated 3D on the CPU) the same way the original 3D application did? I'm focusing only on the Direct3D API part.
Is there any other way I can do the capture? For some applications, PIX fails to capture them.
Is there a way for me to capture only a subset of the rendering, say only the middle 50 frames?
I am wondering how to create a pointer to the VBE video memory so I can display graphics from my 32-bit OS kernel. Can anyone help me?
Download the VBE 3.0 standard (Wikipedia has all the links). Using it, write code that calls function 02h (AX=4F02h) to set a graphics mode, making sure you bitwise-OR the mode number with 4000h to tell the BIOS to use a flat linear frame buffer instead of a windowed/banked one. Use function 01h (AX=4F01h) to obtain the address of the video buffer (it will be in ModeInfoBlock.PhysBasePtr).
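In register terms (from the VBE 3.0 spec) the two calls look like this; the int10h() helper, the regs structure, and SEG()/OFF() are hypothetical stand-ins for however your boot code issues real-mode interrupts:

// Hypothetical real-mode interrupt helper; only the register usage is from the spec.
regs.ax = 0x4F01;            // VBE function 01h: return mode information
regs.cx = mode;              // the mode you want to query
regs.es = SEG(&modeInfo);    // ES:DI -> 256-byte ModeInfoBlock buffer
regs.di = OFF(&modeInfo);
int10h(&regs);
uint32_t lfb_phys = modeInfo.PhysBasePtr;   // physical address of the LFB

regs.ax = 0x4F02;            // VBE function 02h: set mode
regs.bx = mode | 0x4000;     // bit 14 = use the linear frame buffer
int10h(&regs);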
There should be plenty of reference code online for things like this.
I suggest calling the VBE functions before going into protected mode, as it will be much harder to do afterwards (VBE is a real-mode BIOS interface).
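Once the mode is set and you're back in protected mode, drawing is just a store through that pointer. A minimal sketch, assuming a 32bpp linear mode and that the LFB's physical address is mapped (e.g. identity-mapped) in your page tables:

#include <stdint.h>

// lfb_base = ModeInfoBlock.PhysBasePtr (mapped), pitch = BytesPerScanLine.
static void put_pixel(uint8_t* lfb_base, uint32_t pitch,
                      int x, int y, uint32_t color)
{
    volatile uint32_t* row = (volatile uint32_t*)(lfb_base + y * pitch);
    row[x] = color;          // e.g. 0x00FF0000 = red in X8R8G8B8
}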