Ok, so for anyone who's used it much - this should be a SUPER easy question.
I was just searching online for a way to use DirectX/Direct3D to take faster screenshots, and everyone was talking about GetFrontBufferData() and how wonderful it was.
I've pounded at it for a while but am starting to think they're using the term "screenshot" incorrectly... My call succeeds, but I never get a "screenshot".
So the question is, can you actually use GetFrontBufferData() to make a REAL screenshot of the whole desktop, or is it merely a way to read the pixels out of the front buffer WITHIN the confines of your d3d-device's drawing area?
(Upon success, I'd have expected to see the drawing area within my window do that old TV-in-a-TV-in-a-TV kind of effect. I got nothing but black.)
Edit:
So, I was able to get a screenshot to work, but can't seem to put my image into the buffer for my app window.
At first, I thought this was because they were separate devices, but I've tried creating a second surface on the correct device and then manually copying the contents over. Although I might be copying it wrong (not important right now), the call to StretchRect() still fails for the correct device.
Any idea why it won't let me apply this surface to my backbuffer???
// Assumed declared elsewhere:
//   IDirect3D9* d3d;  IDirect3DDevice9 *d3dcdev, *d3drdev;
//   IDirect3DSurface9 *sfScrn, *sfBackBuffer, *sfTransfer;
//   HWND hwDesktop, hwDrawArea;
D3DDISPLAYMODE d3dDisplayMode;
D3DPRESENT_PARAMETERS d3dPresentationParameters;

if( (d3d = Direct3DCreate9(D3D_SDK_VERSION)) == NULL )
    exit(1);

D3DCAPS9 d3dcps;
d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &d3dcps);
DWORD targets = d3dcps.NumSimultaneousRTs;
// TODO: ^ make a way for the user to select one from this and put it into i
DWORD i = D3DADAPTER_DEFAULT;

if( d3d->GetAdapterDisplayMode(i, &d3dDisplayMode) == D3DERR_INVALIDCALL )
    exit(1);

ZeroMemory(&d3dPresentationParameters, sizeof(D3DPRESENT_PARAMETERS));
d3dPresentationParameters.Windowed = TRUE;
d3dPresentationParameters.Flags = D3DPRESENTFLAG_LOCKABLE_BACKBUFFER;
d3dPresentationParameters.BackBufferFormat = d3dDisplayMode.Format;
d3dPresentationParameters.BackBufferCount = 1;
d3dPresentationParameters.BackBufferHeight = d3dDisplayMode.Height;
d3dPresentationParameters.BackBufferWidth = d3dDisplayMode.Width;
d3dPresentationParameters.MultiSampleType = D3DMULTISAMPLE_NONE;
d3dPresentationParameters.MultiSampleQuality = 0;
d3dPresentationParameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
d3dPresentationParameters.hDeviceWindow = hwDesktop;
d3dPresentationParameters.PresentationInterval = D3DPRESENT_INTERVAL_DEFAULT;
d3dPresentationParameters.FullScreen_RefreshRateInHz = D3DPRESENT_RATE_DEFAULT;

if( d3d->CreateDevice(i, D3DDEVTYPE_HAL, hwDesktop, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &d3dPresentationParameters, &d3dcdev) != D3D_OK )
    exit(1);

// GetFrontBufferData requires an A8R8G8B8 surface in D3DPOOL_SYSTEMMEM.
if( d3dcdev->CreateOffscreenPlainSurface(d3dDisplayMode.Width, d3dDisplayMode.Height, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &sfScrn, NULL) != D3D_OK )
    exit(1);
if( d3dcdev->GetFrontBufferData(0, sfScrn) != D3D_OK )
    exit(1);
// we now have a screenshot in sfScrn.

// let's render it to a separate device in our app window!
d3dPresentationParameters.hDeviceWindow = hwDrawArea;
if( d3d->CreateDevice(i, D3DDEVTYPE_HAL, hwDrawArea, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &d3dPresentationParameters, &d3drdev) != D3D_OK )
    exit(1);

d3drdev->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &sfBackBuffer);
if( d3drdev->CreateOffscreenPlainSurface(d3dDisplayMode.Width, d3dDisplayMode.Height, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &sfTransfer, NULL) != D3D_OK )
    exit(1);

D3DLOCKED_RECT lockedRectScrn, lockedRectTransfer;
ZeroMemory(&lockedRectScrn, sizeof(D3DLOCKED_RECT));
ZeroMemory(&lockedRectTransfer, sizeof(D3DLOCKED_RECT));
if( sfScrn->LockRect(&lockedRectScrn, NULL, D3DLOCK_READONLY) != D3D_OK
    || sfTransfer->LockRect(&lockedRectTransfer, NULL, 0) != D3D_OK )
    exit(1);

// Copy row by row: the two surfaces may report different pitches.
for( UINT y = 0; y < d3dDisplayMode.Height; ++y )
    memcpy((BYTE*)lockedRectTransfer.pBits + y * lockedRectTransfer.Pitch,
           (BYTE*)lockedRectScrn.pBits + y * lockedRectScrn.Pitch,
           d3dDisplayMode.Width * 4); // 4 bytes per A8R8G8B8 pixel

sfScrn->UnlockRect();
sfTransfer->UnlockRect();

while( true ) // test loop; a real app would pump window messages here
{
    if( d3drdev != NULL )
    {
        d3drdev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(75,0,0), 1.0f, 0);
        if( D3D_OK != d3drdev->StretchRect(sfTransfer, NULL, sfBackBuffer, NULL, D3DTEXF_NONE) )
        {
            MessageBox(NULL, "failed to use stretchrect", "", 0);
            exit(1);
        }
        // BeginScene returns D3D_OK (0) on success, so don't treat the HRESULT as a bool.
        if( SUCCEEDED(d3drdev->BeginScene()) )
        {
            d3drdev->EndScene();
        }
        d3drdev->Present(NULL, NULL, NULL, NULL);
    }
}
Edit2:
Oh! So apparently you can't use StretchRect() on surfaces in D3DPOOL_SYSTEMMEM, but GetFrontBufferData() requires its target surface to be in D3DPOOL_SYSTEMMEM.
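For reference, here is the shape of the fix as an untested sketch reusing the variable names above: keep the capture in system memory, let UpdateSurface() move it into a D3DPOOL_DEFAULT surface (formats must match, no stretching), and then StretchRect() from that surface to the backbuffer.

IDirect3DSurface9* sfDefault = NULL;
if( d3drdev->CreateOffscreenPlainSurface(d3dDisplayMode.Width, d3dDisplayMode.Height,
        D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &sfDefault, NULL) != D3D_OK )
    exit(1);
// SYSTEMMEM -> DEFAULT: UpdateSurface is the call that's allowed to cross pools.
if( d3drdev->UpdateSurface(sfTransfer, NULL, sfDefault, NULL) != D3D_OK )
    exit(1);
// DEFAULT -> backbuffer: StretchRect accepts a DEFAULT-pool source.
if( d3drdev->StretchRect(sfDefault, NULL, sfBackBuffer, NULL, D3DTEXF_NONE) != D3D_OK )
    exit(1);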
Yes, you can capture the Windows desktop using GetFrontBufferData. As proof I can offer this recent question, where the poster had it working, except when the desktop was using 16-bit colour. It might give you some insight into how to use it correctly.
But no, that's not the "true purpose" of GetFrontBufferData. Its real purpose is to allow Direct3D games to capture screenshots of the game itself regardless of whether the game is windowed or fullscreen, and, most importantly, whether or not the game is using multisampling.
GetFrontBufferData isn't designed to be a better method to take screenshots of the Windows desktop. It might happen to be faster than other methods, but that's not why it exists.
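That intended use is the same handful of calls the question already shows; with the d3dx9 helper library you can then write the capture straight to disk. A minimal sketch (the d3d and device pointers are assumptions, error checks omitted):

// Capture whatever is being presented and save it with D3DX (link d3dx9.lib).
D3DDISPLAYMODE mode;
d3d->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode);
IDirect3DSurface9* shot = NULL;
device->CreateOffscreenPlainSurface(mode.Width, mode.Height,
    D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &shot, NULL); // must be A8R8G8B8 in SYSTEMMEM
device->GetFrontBufferData(0, shot);                  // swap chain 0
D3DXSaveSurfaceToFile("screenshot.bmp", D3DXIFF_BMP, shot, NULL, NULL);
shot->Release();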
I am using the iOS default PrintToPrinter in Xamarin to print without showing a dialog to choose the printer, but it still shows a dialog which says printing to [PRINTER NAME]. Is there any way to hide that dialog as well, like complete silent print functionality?
I am not sure it's possible, but I have seen some apps which do that, and I am not sure whether they are using the same function or not.
Thanks in advance.
Update:
UIPrinterPickerController comes from UIKit and as such there is no way to push the "printing" process to the background and off the main UI thread.
In the current UIPrintInteractionController.PrintToPrinter implementation (currently up to iOS 10.3 B4) there is no exposed way to disable the print progress (Connecting, Preparing, etc...) alert/dialog (w/ Cancel button) or to modify its appearance.
This interface is a high-level wrapper using AirPrint, and thus Internet Printing Protocol (IPP) at a lower level, to perform the actual printing, job queue monitoring on the printer, etc... IPP is not currently exposed as a publicly available framework within iOS...
Programs that allow background printing are not using UIPrintInteractionController to do the printing. Most do use UIPrinterPickerController to obtain a UIPrinter selection from the user, but then use the UIPrinter.Url.AbsoluteUrl to "talk" directly to the printer via HTTP/HTTPS Post/Get. Depending upon the printers used, TCP-based sockets are also an option vs. IPP, as is USB/serial for directly connected printers.
Re: https://en.wikipedia.org/wiki/Internet_Printing_Protocol
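For illustration, the raw-socket route can be as small as the sketch below. This is an assumption-laden example, not Star/AirPrint-specific code: port 9100 is the conventional raw ("JetDirect") print port, and jobBytes must already be in a language the printer understands.

using System.Net.Sockets;

static void SendRawJob(string printerHost, byte[] jobBytes)
{
    // Open a plain TCP connection to the printer's raw print port and push the job.
    using (var client = new TcpClient(printerHost, 9100))
    using (var stream = client.GetStream())
    {
        stream.Write(jobBytes, 0, jobBytes.Length);
        stream.Flush();
    }
}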
Original:
Pick a Printer:
if (allowUserToSelectDifferentPrinter || printerUrl == null)
{
    // Preselect the saved printer in the picker if we have one.
    UIPrinter uiPrinter = printerUrl != null ? UIPrinter.FromUrl(new NSUrl(printerUrl)) : null;
    var uiPrinterPickerController = UIPrinterPickerController.FromPrinter(uiPrinter);
    uiPrinterPickerController.Present(true, (printerPickerController, userDidSelect, error) =>
    {
        if (userDidSelect)
        {
            uiPrinter = uiPrinterPickerController?.SelectedPrinter;
            printerUrl = uiPrinter.Url.AbsoluteUrl.ToString();
            Console.WriteLine($"Save this UIPrinter's Url string for later use: {printerUrl}");
        }
    });
}
Print using UIPrintInteractionController with an existing UIPrinter:
if (printerUrl != null)
{
    // re-create a UIPrinter from a saved NSUrl string
    var uiPrinter = UIPrinter.FromUrl(new NSUrl(printerUrl));

    var printer = UIPrintInteractionController.SharedPrintController;
    printer.ShowsPageRange = false;
    printer.ShowsNumberOfCopies = false;
    printer.ShowsPaperSelectionForLoadedPapers = false;

    var printInfo = UIPrintInfo.PrintInfo;
    printInfo.OutputType = UIPrintInfoOutputType.General;
    printInfo.JobName = "StackOverflow Print Job";

    var textFormatter = new UISimpleTextPrintFormatter("StackOverflow Rocks")
    {
        StartPage = 0,
        ContentInsets = new UIEdgeInsets(72, 72, 72, 72),
        MaximumContentWidth = 6 * 72,
    };

    printer.Delegate = new PrintInteractionControllerDelegate();
    printer.PrintFormatter = textFormatter;
    printer.PrintToPrinter(uiPrinter, (printInteractionController, completed, error) =>
    {
        if (!completed && error != null)
        {
            Console.WriteLine($"Print Error: {error.Code}:{error.Description}");
            PresentViewController(
                UIAlertController.Create("Print Error", $"Code: {error.Code} Description: {error.Description}", UIAlertControllerStyle.ActionSheet),
                true, () => { });
        }
        printInfo?.Dispose();
        uiPrinter?.Dispose();
    });
}
else
{
    Console.WriteLine("User has not selected a printer...printing disabled");
}
I know this is a somewhat old thread, but I had been struggling with implementing silent printing in iOS for one of my customers, and I finally came across an acceptable solution that is very easy to implement.
As mentioned in the accepted answer, there is no way to get rid of the popup that displays printing progress. Yet there is a way of hiding it. You can simply change the UIWindowLevel of your key window to UIWindowLevel.Alert + 100. This will guarantee your current window displays above ANY alert view.
Be careful though: as I mentioned, it will be displayed over ANY alert view after the level has been changed. Luckily you can just switch the level back to "Normal" to get the original behavior.
So to recap my solution: I use UIPrintInteractionController.PrintToPrinter to print directly to a printer object I created using UIPrinter.FromUrl (this is Xamarin.iOS code, btw). Before doing so, I adjust my window level to alert + 100, and once printing is complete I reset my window level to "Normal". Now my printing happens without any visual feedback to my user.
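In Xamarin.iOS terms the trick is just a couple of lines; a sketch (assumes printer and uiPrinter are set up as in the accepted answer above):

var window = UIApplication.SharedApplication.KeyWindow;

// Raise our window above alert level so the print progress dialog is covered.
window.WindowLevel = UIWindowLevel.Alert + 100;

printer.PrintToPrinter(uiPrinter, (printInteractionController, completed, error) =>
{
    // Printing is done (or failed): restore the normal level so future alerts show.
    window.WindowLevel = UIWindowLevel.Normal;
});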
Hope this helps somebody!
I am not sure if it is Xamarin-specific or a native problem.
I create my ViewRenderer, and in OnElementChanged I create my UIImageView:
base.OnElementChanged(e);

Foundation.NSError error;
var session = AVFoundation.AVAudioSession.SharedInstance();
session.SetCategory(AVFoundation.AVAudioSession.CategoryPlayAndRecord, out error);
if (error != null)
{
    ClientLogger.Instance.Log("Error in MediaViewRenderer creating AV session, error code: " + error.Code, ClientLogger.LogLevel.Error);
}

//_control = e.NewElement as CustomMediaView;
UIKit.UIImageView surface = new UIKit.UIImageView();
if (surface != null)
{
    this.SetNativeControl(surface);
}
I create my video layer if it is null, and set Bounds and Frame each time I render:
if (_surface != null)
{
    if (_videoLayer == null && IsRunning)
    {
        _videoLayer = new AVSampleBufferDisplayLayer();
        _videoLayer.VideoGravity = AVLayerVideoGravity.ResizeAspect.ToString();
        _timeBase = new CMTimebase(CMClock.HostTimeClock);
        _videoLayer.ControlTimebase = _timeBase;
        _videoLayer.ControlTimebase.Time = CMTime.Zero;
        _videoLayer.ControlTimebase.Rate = 1.0;
        _surface.Layer.AddSublayer(_videoLayer);
    }
    if (_videoLayer != null)
    {
        //if (_videoLayer.VisibleRect == null || _videoLayer.VisibleRect.Height == 0 || _videoLayer.VisibleRect.Width == 0)
        //    ClientLogger.Instance.Log("Error iOS H264Decoder rect", ClientLogger.LogLevel.Error);
        _videoLayer.Frame = _surface.Frame;
        _videoLayer.Bounds = _surface.Bounds;
    }
}
I receive my RTP stream, then decode and display my video as described here:
How to use VideoToolbox to decompress H.264 video stream
If I want to stop the video, I set the video layer to null, and later the surface too.
_videoLayer.Flush();
_videoLayer.Dispose();
_videoLayer = null;
_surface.Dispose();
_surface = null;
That works great and gives me a nice H264 video, around 15 times.
After that it shows only a blank background, no video visible. The decoder works fine and seems to render; surface and video layer are not null.
There seems to be no memory leak, or at least not one big enough to be a problem.
Happens on both iOS 9 and 10.
I think there is something wrong with the video layer?
Any idea why it works only around 15 times?
Thanks a lot for any help or ideas!
Since you don't provide all of the code for your "stop the video" process, I'm going to assume that you are not calling the removeFromSuperlayer method of your videoLayer property and you are not calling the removeFromSuperview method of your surface property?
This will result in those objects still being present in the view hierarchy and the layer tree, and very likely still holding onto lower-level VT resources. You need to remove all references to those objects by removing them from the view hierarchy and layer tree.
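In Xamarin.iOS terms, a teardown that detaches before disposing would look roughly like this (a sketch based on the fields in the question):

if (_videoLayer != null)
{
    _videoLayer.Flush();
    _videoLayer.RemoveFromSuperLayer(); // detach from the layer tree
    _videoLayer.Dispose();
    _videoLayer = null;
}
if (_surface != null)
{
    _surface.RemoveFromSuperview();     // detach from the view hierarchy
    _surface.Dispose();
    _surface = null;
}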
It would be incredibly useful for a project I am working on if, when an iOS device is turned on and boots up, I could save a timestamp of that occurrence. Is there any code I can run that will store this variable? If not, is there a way I can check, without opening the app, whether an iPhone was on at a certain time?
Thanks!
Sorry for the confusion:
I want to make an app that has the capability to remember the last time an iOS device was turned on.
You can read the kern.boottime sysctl, which tells you when the system booted. I don't believe it would get you thrown out of the App Store in this case:
#import <sys/sysctl.h>

// Returns the kernel boot time as a Unix timestamp, or -1 on failure.
- (time_t)bootTime
{
    struct timeval boottime;
    int item[2] = { CTL_KERN, KERN_BOOTTIME };
    size_t size = sizeof(boottime);
    int st = sysctl(item, 2, &boottime, &size, 0, 0);
    if (st < 0)
        return -1;
    return boottime.tv_sec;
}
No, this is not possible in iOS.
I have recently started using the Microsoft Kinect for Windows SDK to program some stuff using the Kinect device.
I am busting my ass to find a way to detect whether a certain hand is closed or opened.
I saw the Kinect for Windows Toolkit, but the documentation is non-existent and I can't find a way to make it work.
Does anyone know of a simple way to detect the hand's state? Even better if it doesn't involve the need to use the Kinect toolkit.
This is how I did it eventually:
First things first, we need a dummy class that looks somewhat like this:
public class DummyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId,
        InteractionHandType handType,
        double x,
        double y)
    {
        var result = new InteractionInfo();
        result.IsGripTarget = true;
        result.IsPressTarget = true;
        result.PressAttractionPointX = 0.5;
        result.PressAttractionPointY = 0.5;
        result.PressTargetControlId = 1;
        return result;
    }
}
Then, in the main application code, we need to register the interaction event handler like this:
this.interactionStream = new InteractionStream(args.NewSensor, new DummyInteractionClient());
this.interactionStream.InteractionFrameReady += InteractionStreamOnInteractionFrameReady;
Finally, the code for the handler itself:
private void InteractionStreamOnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    using (InteractionFrame frame = e.OpenInteractionFrame())
    {
        if (frame != null)
        {
            if (this.userInfos == null)
            {
                this.userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
            }
            frame.CopyInteractionDataTo(this.userInfos);
        }
        else
        {
            return;
        }
    }

    foreach (UserInfo userInfo in this.userInfos)
    {
        foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
        {
            string action = null;
            switch (handPointer.HandEventType)
            {
                case InteractionHandEventType.Grip:
                    action = "gripped";
                    break;
                case InteractionHandEventType.GripRelease:
                    action = "released";
                    break;
            }

            if (action != null)
            {
                string handSide = "unknown";
                switch (handPointer.HandType)
                {
                    case InteractionHandType.Left:
                        handSide = "left";
                        break;
                    case InteractionHandType.Right:
                        handSide = "right";
                        break;
                }

                if (handSide == "left")
                {
                    if (action == "released")
                    {
                        // left hand released code here
                    }
                    else
                    {
                        // left hand gripped code here
                    }
                }
                else
                {
                    if (action == "released")
                    {
                        // right hand released code here
                    }
                    else
                    {
                        // right hand gripped code here
                    }
                }
            }
        }
    }
}
SDK 1.7 introduces the interaction concept called "grip". You can read about all the KinectInteraction concepts at the following link: http://msdn.microsoft.com/en-us/library/dn188673.aspx
The way Microsoft has implemented this is via events from a KinectRegion. Among the KinectRegion events are HandPointerGrip and HandPointerGripRelease, which fire at the appropriate moments. Because the event comes from the element the hand is over, you can easily take the appropriate action from the event handler.
Note that a KinectRegion can contain anything. The base class is a ContentControl, so you can place anything from something as simple as an image to a complex Grid layout within the region to be acted on.
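For illustration, wiring those events up from code-behind might look like the sketch below. The element name myButton is an assumption, and I'm going from memory on the attached-event helper names, so double-check them against the toolkit:

public MainWindow()
{
    InitializeComponent();

    // KinectRegion exposes grip/grip-release as attached events on elements inside the region.
    KinectRegion.AddHandPointerGripHandler(myButton, OnHandPointerGrip);
    KinectRegion.AddHandPointerGripReleaseHandler(myButton, OnHandPointerGripRelease);
}

private void OnHandPointerGrip(object sender, HandPointerEventArgs args)
{
    // hand closed while over myButton
}

private void OnHandPointerGripRelease(object sender, HandPointerEventArgs args)
{
    // hand opened again
}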
You can find an example of how to use this interaction in the ControlBasics-WPF example, provided with the SDK.
UPDATE:
KinectRegion is simply a fancy ContentControl, which in turn is just a container, which can have anything put inside. Have a look at the ControlBasics-WPF example, at the Kinect for Windows CodePlex, and do a search for KinectRegion in the MainWindow.xaml file. You'll see that there are several controls inside it which are acted upon.
To see how Grip and GripRelease are implemented in this example, it is best to open the solution in Visual Studio and do a search for "grip". The way they do it is a little odd, in my opinion, but it is a clean implementation that flows very well.
As far as I know, the Microsoft Kinect for Windows SDK is not well suited to detecting open and closed hands. Microsoft provides tracking of 20 body joints, which do not include the fingers of the hand. You can take advantage of the Kinect interactions for that in an indirect way. This tutorial shows how:
http://dotneteers.net/blogs/vbandi/archive/2013/05/03/kinect-interactions-with-wpf-part-iii-demystifying-the-interaction-stream.aspx
But I think the best solution for tracking finger movements would be using the OpenNI SDK.
Some of the middlewares of OpenNI allow finger tracking.
You can use something like this:
private void OnHandleHandMove(object source, HandPointerEventArgs args)
{
    HandPointer ptr = args.HandPointer;
    if (ptr.HandEventType == HandEventType.Grip)
    {
        // TODO
    }
}
I'm trying to print a label with a Star Micronics TSP650II printer in a MonoTouch app.
The problem is that session.OutputStream.HasSpaceAvailable() always returns false. What am I missing?
The C# code I have goes something like this (cut for simplicity):
var manager = EAAccessoryManager.SharedAccessoryManager;
var starPrinter = manager.ConnectedAccessories.FirstOrDefault (p => p.Name.IndexOf ("Star") >= 0); // this does find the EAAccessory correctly
var session = new EASession (starPrinter, starPrinter.ProtocolStrings [0]); // the second parameter resolves to "jp.star-m.starpro"
session.OutputStream.Schedule (NSRunLoop.Current, "kCFRunLoopDefaultMode");
session.OutputStream.Open ();

byte[] toSend = GetInitData(); // this comes from another project where the same printer with an ethernet cable was used in a Windows environment and worked; not null for sure

if (session.OutputStream.HasSpaceAvailable()) {
    int bytesWritten = session.OutputStream.Write (toSend, (uint)toSend.Length);
    if (bytesWritten < 0) {
        Debug.WriteLine ("ERROR WRITING DATA");
    } else {
        Debug.WriteLine ("Some data written, ignoring the rest, just a test");
    }
} else {
    Debug.WriteLine ("NO SPACE"); // THIS ALWAYS PRINTS, the output stream is never ready to take any output
}
UPDATE:
I was able to work around this problem by binding the Star Micronics iOS SDK to my project, but that's less than ideal, as it adds 700K to the package for something that should work without the binding.
UPDATE 2:
I've been getting requests for the binding code. I still strongly recommend you try to figure out the Bluetooth connectivity and not use the binding, but for those who are brave enough, here it is.
This is Kale Evans, Software Integration Engineer at Star Micronics.
Although Apple's EADemo doesn't show this, the following piece of code is important for printing to an EAAccessory. (Note: the code below is an Objective-C example.)
if ([[_session outputStream] hasSpaceAvailable] == NO)
{
    [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
}
This gives the OS time to process all input sources.
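In Xamarin/C# the equivalent wait would be something like this sketch:

// Give the run loop time to service the accessory session until the stream is writable.
while (!session.OutputStream.HasSpaceAvailable())
{
    NSRunLoop.Current.RunUntil(NSDate.FromTimeIntervalSinceNow(0.1));
}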
You say "this does find the EAAccessory correctly".
Could the session actually being null be the reason the OutputStream returns false?
Best Regards,
Star Support