I want to capture the screen, but I have a window showing video from a capture card via DirectShow. On some PCs the captured region of the recorded screen comes out empty (dark) for an unknown reason. When I capture the screen with DirectX the problem goes away, but with the third-party library I use (Microsoft Expression Encoder screen capture) the problem remains. Is the DirectShow buffer different from the desktop GDI buffer? How can I solve this problem?
How can I resize a Texture2D in SharpDX? I'm using SharpDX to duplicate the screen, and I use Media Foundation to encode the texture into a video file. My problem is that when I open an application in fullscreen and it has a resolution different from the system resolution, I get a blank screen in my recording. Is there a way I can resize the texture before handing it to Media Foundation for encoding without hurting performance? I'm using hardware acceleration. Thanks.
It depends on how exactly you use Media Foundation, but you can use the Video Processor MFT explicitly. If you use IMFMediaSession/IMFTopology, you need to add this MFT to the topology; if you use the Sink Writer, initialize the MFT yourself and run the samples through it. In that case you need to supply the DXGI device manager to the MFT using the MFT_MESSAGE_SET_D3D_MANAGER message.
This MFT is only available on Windows 8 and higher, so if you need to support an older Windows version you can use the Video Resizer DSP instead, but it is not hardware accelerated.
Another option for resizing the texture is to create a render target of the desired size and draw the texture onto it. After that, you feed that render target to the encoder.
I am using GPUImage's GPUImageVideoCamera initWithSessionPreset:cameraPosition: to display video from the rear-facing camera on an iOS device (targeting iOS 7). The video is filtered and displayed on a GPUImageView. The session preset will not exceed AVCaptureSessionPreset640x480.
At any given moment in the app, I need to recall the past 5 seconds of unfiltered video captured from the rear-facing camera and instantly play this back on another (or the same) GPUImageView.
I can access the CMSampleBufferRef via GPUImageVideoCamera's willOutputSampleBuffer: callback, which is passed through from the underlying capture output, but I'm not sure how to get the most recent frames into memory efficiently so that they can be instantly, seamlessly played back.
I believe the solution is a circular buffer, using something like TPCircularBuffer, but I'm not sure that will work with a video stream. I also wanted to reference the unanswered questions "Buffering CMSampleBufferRef into a CFArray" and "Hold multiple Frames in Memory before sending them to AVAssetWriter", as they closely resembled my original plan of attack until I started researching this.
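For illustration only, here is a minimal Objective-C sketch of that ring-buffer idea, retaining the last N sample buffers as they arrive from GPUImageVideoCamera's delegate callback. The class name and capacity are hypothetical, and holding this many camera buffers can starve the capture pipeline's pixel buffer pool, so a real implementation may need to deep-copy the pixel data instead of keeping the original buffers:

#import <CoreMedia/CoreMedia.h>

// Hypothetical ring buffer holding the most recent CMSampleBufferRefs.
// NSMutableArray retains/releases the CF objects for us under ARC.
@interface FrameRingBuffer : NSObject
- (instancetype)initWithCapacity:(NSUInteger)capacity;
- (void)appendSampleBuffer:(CMSampleBufferRef)sampleBuffer;
- (NSArray *)bufferedFrames;          // oldest-first snapshot for playback
@end

@implementation FrameRingBuffer {
    NSMutableArray *_frames;
    NSUInteger _capacity;             // e.g. 5 s * 30 fps = 150 frames
}

- (instancetype)initWithCapacity:(NSUInteger)capacity {
    if ((self = [super init])) {
        _capacity = capacity;
        _frames = [NSMutableArray arrayWithCapacity:capacity];
    }
    return self;
}

// Call this from willOutputSampleBuffer: for every incoming frame.
- (void)appendSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    [_frames addObject:(__bridge id)sampleBuffer];   // array retains the buffer
    if (_frames.count > _capacity) {
        [_frames removeObjectAtIndex:0];             // array releases the oldest
    }
}

- (NSArray *)bufferedFrames {
    return [_frames copy];
}
@end

Whether retaining the original buffers (rather than copies) is acceptable to the capture pipeline at 640x480 and full frame rate is exactly the open question above.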
I have a problem with the response time when capturing an image from the still pin.
My system runs a 320x480 video stream from the capture pin, and I push a button to snap a 720p image from the still pin at the same time. But the response time is too long, around 6 seconds, which is not acceptable.
I have tried other video capture software that supports snapping a picture while streaming video, and the response time is similar.
I am wondering whether this is a hardware problem or a software problem, and how still pin capture actually works. Is the still image produced by interpolation, or does the hardware switch resolution?
For example, the camera starts at one resolution setting, keeps sensing, and pushes the data to the buffer over USB. Is it possible for it to immediately change to another resolution setting and snap an image? Is this why the system takes the picture so slowly?
Or is there a way to keep the video streaming at a high frame rate and snap a high-resolution image immediately, without interpolation?
I am working on a project that snaps an image from the video stream. The technology I use is DirectShow, and the response time is not as long as yours. In my experience, the response time has nothing to do with the streaming frame rate.
Usually a camera has its own default resolution. It cannot immediately change to another resolution setting and snap an image, so that is not the reason.
Could you please show me some code, and tell me your camera's type?
I have used the method from "iOS4: how do I use video file as an OpenGL texture?" to successfully render video frames in OpenGL.
However, this method seems to fall down when you want to scrub (jump to a certain point in the playback), as it only supplies video frames sequentially.
Does anyone know how this behaviour can be achieved?
One easy way to implement this is to export the video to a series of frames, store each frame as a PNG, and then "scrub" by seeking to the PNG at a specific offset. That gives you random access into the image stream at the cost of decoding the entire video first and holding all the data on disk. It also involves decoding each frame as it is accessed, which eats up CPU, but modern iPhones and iPads can handle it as long as you are not doing too much else.
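As a rough sketch of that approach, assuming AVAssetImageGenerator for the export step (the helper names, output directory, and frame rate are placeholders):

#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Export every frame of the asset to frame_<index>.png in outputDir.
static void ExportFramesToPNGs(NSURL *videoURL, NSString *outputDir, float fps)
{
    AVAsset *asset = [AVAsset assetWithURL:videoURL];
    AVAssetImageGenerator *generator =
        [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
    generator.requestedTimeToleranceBefore = kCMTimeZero;   // exact frame times
    generator.requestedTimeToleranceAfter  = kCMTimeZero;

    NSUInteger frameCount = (NSUInteger)(CMTimeGetSeconds(asset.duration) * fps);
    for (NSUInteger i = 0; i < frameCount; i++) {
        NSError *error = nil;
        CGImageRef cgImage = [generator copyCGImageAtTime:CMTimeMakeWithSeconds(i / fps, 600)
                                               actualTime:NULL
                                                    error:&error];
        if (!cgImage) continue;                              // skip unreadable frames

        NSData *png = UIImagePNGRepresentation([UIImage imageWithCGImage:cgImage]);
        CGImageRelease(cgImage);
        NSString *path = [outputDir stringByAppendingPathComponent:
                          [NSString stringWithFormat:@"frame_%05lu.png", (unsigned long)i]];
        [png writeToFile:path atomically:YES];
    }
}

// "Scrubbing" then becomes loading the PNG closest to the requested time.
static UIImage *FrameAtTime(NSString *outputDir, Float64 seconds, float fps)
{
    NSUInteger index = (NSUInteger)(seconds * fps);
    NSString *path = [outputDir stringByAppendingPathComponent:
                      [NSString stringWithFormat:@"frame_%05lu.png", (unsigned long)index]];
    return [UIImage imageWithContentsOfFile:path];
}

The returned UIImage (or its CGImage) can then be uploaded to an OpenGL texture in the usual way; as noted above, exporting every frame of anything but a short clip will consume a lot of disk space.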
The application I am currently working on has, as its main functionality, continuous scanning of QR/bar codes using the ZXing library (http://code.google.com/p/zxing/). For continuous frame capture I initialize the AVCaptureSession, AVCaptureVideoDataOutput, and AVCaptureVideoPreviewLayer as described in Apple's Q&A http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html.
My problem is that when I run the camera preview, the image I see through the video device is much larger (about 1.5x) than the image seen through the iPhone's still camera. Our customer needs to hold the iPhone about 5 cm from the bar code when scanning, but at that distance the whole QR code is not visible and the decoding fails.
Why does the video camera on the iPhone 4 enlarge the image (as seen through the AVCaptureVideoPreviewLayer)?
This is a function of the AVCaptureSession video preset, accessible by using the .sessionPreset property. For example, after configuring your captureSession, but before starting it, you would add
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
See the documentation here:
iOS Reference Document
The default preset for video is 1280x720 (I think) which is a lower resolution than the max supported by the camera. By using the "Photo" preset, you're getting the raw camera data.
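Put together, a hedged sketch of that setup might look like the following, where someView is a placeholder for whatever view hosts the preview and the canSetSessionPreset: check is just defensive:

#import <AVFoundation/AVFoundation.h>

AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];

// Request the still-camera field of view instead of the default video preset.
if ([captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
}

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input) {
    [captureSession addInput:input];
}

AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = someView.bounds;        // someView: hypothetical host view
[someView.layer addSublayer:previewLayer];

[captureSession startRunning];               // preset must be set before this

Whether the Photo preset still delivers frames fast enough for continuous barcode scanning is worth testing on the target device.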
You see the same behaviour with the built-in iPhone Camera app. Switch between still and video capture modes and you'll notice that the default zoom level changes. You see a wider view in still mode, whereas video mode zooms in a bit.
My guess is that continuous video capture needs to use a smaller area of the camera sensor to work optimally. If it used the whole sensor perhaps the system couldn't sustain 30 fps. Using a smaller area of the sensor gives the effect of "zooming in" to the scene.
I am answering my own question again. This was not answered even on the Apple Dev forums, so I filed a technical support request directly with Apple, and they replied that this is a known issue that will be fixed in a future release. So there is nothing we can do but wait and see.