How to set up GCController valueChangedHandler properly in Xcode? - ios

I have successfully connected a SteelSeries Nimbus dual-analog controller to use for testing in both my iOS and tvOS apps, but I am unsure how to properly set up the valueChangedHandler portion of my GCController property.
I understand so far that there are microGamepad, gamepad and extendedGamepad controller profiles and the differences between them. I also understand that you can check whether the respective profile is available on the controller connected to your device.
But now I am having trouble setting up the valueChangedHandler, because if I set all three handlers like so, the only one that actually fires is the last one set in this sequence:
self.gameController = GCController.controllers()[0]

self.gameController.extendedGamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.extendedGamepad?.leftThumbstick {
        // Never gets called
    }
}

self.gameController.gamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.gamepad?.dpad {
        // Never gets called
    }
}

self.gameController.microGamepad?.valueChangedHandler = { (gamepad, element) -> Void in
    if element == self.gameController.microGamepad?.dpad {
        // Gets called
    }
}
If I switch them around and set self.gameController.extendedGamepad?.valueChangedHandler... last, then those methods will work and the gamepad and microGamepad methods will not.
Does anyone know how to fix this?

You test which profile is available and, depending on the profile, you set the valueChangedHandler.
It's important to realise that the extendedGamepad profile contains the most functionality and the microGamepad the least (I think the microGamepad is only used for the Apple TV remote), so the checks should be ordered from most to least capable. An extendedGamepad has all the functionality of the microGamepad plus additional controls, so with your code you always end up in the microGamepad handler.
Apple uses the following code in the DemoBots example project:
private func registerMovementEvents() {
    /// An analog movement handler for D-pads and movement thumbsticks.
    let movementHandler: GCControllerDirectionPadValueChangedHandler = { [unowned self] _, xValue, yValue in
        // Code to handle movement here ...
    }

    #if os(tvOS)
    // `GCMicroGamepad` D-pad handler.
    if let microGamepad = gameController.microGamepad {
        // Allow the gamepad to handle transposing D-pad values when rotating the controller.
        microGamepad.allowsRotation = true
        microGamepad.dpad.valueChangedHandler = movementHandler
    }
    #endif

    // `GCGamepad` D-pad handler.
    // This will never be entered for the Apple TV remote, as the remote is a microGamepad.
    if let gamepad = gameController.gamepad {
        gamepad.dpad.valueChangedHandler = movementHandler
    }

    // `GCExtendedGamepad` left thumbstick.
    if let extendedGamepad = gameController.extendedGamepad {
        extendedGamepad.leftThumbstick.valueChangedHandler = movementHandler
    }
}
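Applied to the code in the question, a minimal sketch of that ordering (my own illustration, not taken from DemoBots) checks the most capable profile first and falls through to the simpler ones:
// Check the richest profile first so an extended controller like the Nimbus
// never falls through to the gamepad or microGamepad handler.
if let extendedGamepad = self.gameController.extendedGamepad {
    extendedGamepad.valueChangedHandler = { gamepad, element in
        if element == gamepad.leftThumbstick {
            // Handle thumbstick movement here
        }
    }
} else if let gamepad = self.gameController.gamepad {
    gamepad.valueChangedHandler = { gamepad, element in
        if element == gamepad.dpad {
            // Handle D-pad input here
        }
    }
} else if let microGamepad = self.gameController.microGamepad {
    microGamepad.valueChangedHandler = { gamepad, element in
        if element == gamepad.dpad {
            // Handle Apple TV remote input here
        }
    }
}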

Related

Lazy rendering print pages on iOS

I need to print custom UI in a cross-platform application. On Windows, the workflow is:
Choose what to print
Open the Windows print dialog/print preview. This queries for print pages only as they are needed.
Begin printing and report progress as each page prints.
I am trying to replicate this workflow on iOS, but it seems that all the print pages need to be print-ready before opening the iOS print dialog (UIPrintInteractionController). This is a big problem because the user may be printing a lot of pages, and each page can take a while to prepare for printing, so it takes too long to prepare and open the UIPrintInteractionController.
Is there any way on iOS to have it lazily query for the print pages one (or even a few) at a time as they are being previewed or printed? Or do they all have to be prepared ahead of presenting the UIPrintInteractionController?
EDIT: It's a major problem when printing many pages because the app can easily run out of memory holding that many UIImages at the same time.
I tried using the PrintingItems property with UIImages, as well as a UIPrintPageRenderer. I am using Xamarin.iOS, so pardon the C# syntax, but you get the idea; an answer in Swift or Objective-C would be fine. Here is my sample pseudo-code:
// Version 1
public void ShowPrintUI_UsingPrintingItems()
{
    UIPrintInfo printOptions = UIPrintInfo.PrintInfo;
    InitializePrintOptions(printOptions);
    _printer.PrintInfo = printOptions;
    var printRect = new CGRect(new CGPoint(), _printer.PrintPaper.PaperSize);

    // Each of these is horribly slow.
    // Can I have it render only when the page is actually needed (being previewed or printed)?
    for (uint i = 0; i < numPrintPages; i++)
    {
        _printPages[i] = RenderPrintPage(i, printRect);
    }
    _printer.PrintingItems = _printPages;

    _printer.Present(true, (handler, completed, error) =>
    {
        // Clean up, etc.
    });
}

// Version 2
public void ShowPrintUI_UsingPageRenderer()
{
    UIPrintInfo printOptions = UIPrintInfo.PrintInfo;
    InitializePrintOptions(printOptions);
    _printer.PrintInfo = printOptions;

    // This still draws every page in the range and is just as slow.
    _printer.PrintPageRenderer = new MyRenderer();
    _printer.Present(true, (handler, completed, error) =>
    {
        // Clean up, etc.
    });
}

private class MyRenderer : UIPrintPageRenderer
{
    private readonly IIosPrintCallback _callback;
    private nint _numPrintPages;    // set elsewhere in the sample

    public MyRenderer(IIosPrintCallback callback)
    {
        _callback = callback;
    }

    public override void DrawPage(nint index, CGRect pageRect)
    {
        DrawPrintPage(index, pageRect);
    }

    public override nint NumberOfPages => _numPrintPages;
}

CFNotificationCenter repeating events statements

I have been working on an enterprise iOS/Swift (iOS 11.3, Xcode 9.3.1) app in which I want to be notified when the screen changes (goes blank or becomes active) and capture the events in a Realm database. I am using the answer from tbaranes in "Detect screen unlock events in iOS Swift" and it works, but I see additional repeats as the screen goes blank and becomes active:
Initial blank: a single event recorded
Initial re-activation: two events are recorded
Second blank: two events are recorded
Second re-activation: three events are recorded
and so on, with one additional event recorded each cycle.
Something in the code (or missing from the code) must be causing this additive effect, but I can't find it. And yes, the print statements show the issue is not within the Realm database; the callbacks really are repeated.
My code is below. Any suggestions are appreciated.
CFNotificationCenterAddObserver(CFNotificationCenterGetDarwinNotifyCenter(),          // center
                                Unmanaged.passUnretained(self).toOpaque(),             // observer
                                displayStatusChangedCallback,                          // callback
                                "com.apple.springboard.hasBlankedScreen" as CFString,  // event name
                                nil,                                                   // object
                                .deliverImmediately)
private let displayStatusChangedCallback: CFNotificationCallback = { _, cfObserver, cfName, _, _ in
    guard let lockState = cfName?.rawValue as String? else {
        return
    }
    let catcher = Unmanaged<AppDelegate>.fromOpaque(UnsafeRawPointer(OpaquePointer(cfObserver)!)).takeUnretainedValue()
    catcher.displayStatusChanged(lockState)
    print("how many times?")
}
private func displayStatusChanged(_ lockState: String) {
    // the "com.apple.springboard.lockcomplete" notification will always come after the "com.apple.springboard.lockstate" notification
    print("Darwin notification NAME = \(lockState)")
    if lockState == "com.apple.springboard.hasBlankedScreen" {
        print("A single Blank Screen")
        let statusString = dbSource() // Realm database
        statusString.infoString = "blanked screen"
        print("statusString: \(statusString)")
        statusString.save()
        return
    }
}
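Not part of the question, and only a guess at the cause: one pattern that produces exactly this one-extra-callback-per-cycle symptom is registering the Darwin observer more than once (for example, every time the app becomes active). A sketch that guards against double registration, using hypothetical names, would look like this:
private var hasRegisteredScreenObserver = false // hypothetical flag

private func registerScreenObserverIfNeeded() {
    // Only register the Darwin observer once per process.
    guard !hasRegisteredScreenObserver else { return }
    hasRegisteredScreenObserver = true

    CFNotificationCenterAddObserver(CFNotificationCenterGetDarwinNotifyCenter(),
                                    Unmanaged.passUnretained(self).toOpaque(),
                                    displayStatusChangedCallback,
                                    "com.apple.springboard.hasBlankedScreen" as CFString,
                                    nil,
                                    .deliverImmediately)
}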

Is there a way to tell if a MIDI-Device is connected via USB on iOS?

I'm using CoreMIDI to receive messages from a MIDI keyboard via the Camera Connection Kit on iOS devices. My app is about pitch recognition. I want the following behaviour to be automatic:
By default use the microphone (already implemented); if a MIDI keyboard is connected, use that instead.
I could figure out how to tell whether it is a USB keyboard that uses the default driver: just ask for the device called "USB-MIDI":
private func getUSBDeviceReference() -> MIDIDeviceRef? {
    for index in 0..<MIDIGetNumberOfDevices() {
        let device = MIDIGetDevice(index)
        var name: Unmanaged<CFString>?
        MIDIObjectGetStringProperty(device, kMIDIPropertyName, &name)
        if name!.takeRetainedValue() as String == "USB-MIDI" {
            return device
        }
    }
    return nil
}
But unfortunately there are USB keyboards that use a custom driver. How can I tell if I'm looking at one of these? Standard Bluetooth and network devices seem to always be online, even if Wi-Fi and Bluetooth are turned off on the device (strange?).
I ended up using the USBLocationID. It has worked with every device I have tested so far and none of the users complained, but I don't expect many users to use the MIDI features of my app.
/// Filters all `MIDIDeviceRef`'s for USB devices
private func getUSBDeviceReferences() -> [MIDIDeviceRef] {
    var devices = [MIDIDeviceRef]()
    for index in 0..<MIDIGetNumberOfDevices() {
        let device = MIDIGetDevice(index)
        var list: Unmanaged<CFPropertyList>?
        MIDIObjectGetProperties(device, &list, true)
        if let list = list {
            let dict = list.takeRetainedValue() as! NSDictionary
            if dict["USBLocationID"] != nil {
                devices.append(device)
            }
        }
    }
    return devices
}
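As a usage sketch (the surrounding function names are hypothetical, not part of the answer), the result of getUSBDeviceReferences() can drive the microphone-versus-MIDI decision described above:
// Hypothetical wiring: prefer an attached USB MIDI device, otherwise fall
// back to the existing microphone-based pitch recognition.
func selectInputSource() {
    if let usbDevice = getUSBDeviceReferences().first {
        startReceivingMIDI(from: usbDevice)   // hypothetical MIDI input setup
    } else {
        startMicrophonePitchRecognition()     // hypothetical existing microphone path
    }
}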

Why does Example app using Photos framework use stopCachingImagesForAllAssets after each change?

So I'm looking at the Photos API and Apple's sample code here
https://developer.apple.com/library/ios/samplecode/UsingPhotosFramework/Introduction/Intro.html
and its conversion to swift here
https://github.com/ooper-shlab/SamplePhotosApp-Swift
I have integrated the code into my project so that a collectionView successfully updates itself from the library as I take photos. There is one quirk: sometimes cells are blank, and it seems to be connected to stopCachingImagesForAllAssets, which Apple calls each time the library is updated, at the end of the photoLibraryDidChange delegate method.
I can remove the line and it fixes the problem, but surely there is a reason Apple put it there in the first place? I am concerned about memory usage.
// MARK: - PHPhotoLibraryChangeObserver
func photoLibraryDidChange(changeInstance: PHChange) {
    // Check if there are changes to the assets we are showing.
    guard let
        assetsFetchResults = self.assetsFetchResults,
        collectionChanges = changeInstance.changeDetailsForFetchResult(assetsFetchResults)
        else { return }

    /*
        Change notifications may be made on a background queue. Re-dispatch to the
        main queue before acting on the change as we'll be updating the UI.
    */
    dispatch_async(dispatch_get_main_queue()) {
        // Get the new fetch result.
        self.assetsFetchResults = collectionChanges.fetchResultAfterChanges
        let collectionView = self.pictureCollectionView!

        if !collectionChanges.hasIncrementalChanges || collectionChanges.hasMoves {
            // Reload the collection view if the incremental diffs are not available.
            collectionView.reloadData()
        } else {
            /*
                Tell the collection view to animate insertions and deletions if we
                have incremental diffs.
            */
            collectionView.performBatchUpdates({
                if let removedIndexes = collectionChanges.removedIndexes
                    where removedIndexes.count > 0 {
                    collectionView.deleteItemsAtIndexPaths(removedIndexes.aapl_indexPathsFromIndexesWithSection(0))
                }
                if let insertedIndexes = collectionChanges.insertedIndexes
                    where insertedIndexes.count > 0 {
                    collectionView.insertItemsAtIndexPaths(insertedIndexes.aapl_indexPathsFromIndexesWithSection(0))
                }
                if let changedIndexes = collectionChanges.changedIndexes
                    where changedIndexes.count > 0 {
                    collectionView.reloadItemsAtIndexPaths(changedIndexes.aapl_indexPathsFromIndexesWithSection(0))
                }
            }, completion: nil)
        }
        self.resetCachedAssets() // perhaps prevents memory warning but causes the empty cells
    }
}

// MARK: - Asset Caching
private func resetCachedAssets() {
    self.imageManager?.stopCachingImagesForAllAssets()
    self.previousPreheatRect = CGRectZero
}
I was having the same result.
Here's what fixed the issue for me:
Since performBatchUpdates is asynchronous, the resetCachedAssets call may execute while the delete/insert/reload is happening, or even between those operations.
That didn't sound right to me, so I moved the line
self.resetCachedAssets()
to the first line of the dispatch_async block.
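In terms of the sample above, the reordering amounts to something like this (a sketch in the same Swift 2 style as the sample):
dispatch_async(dispatch_get_main_queue()) {
    // Stop caching before any batch updates are queued, not after them.
    self.resetCachedAssets()

    // Get the new fetch result.
    self.assetsFetchResults = collectionChanges.fetchResultAfterChanges
    // ... reloadData() or performBatchUpdates(...) exactly as before ...
}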
I hope this helps you too.

How to detect open/closed hand using Microsoft Kinect for Windows SDK ver 1.7 C#

I have recently started using the Microsoft Kinect for Windows SDK to program some stuff using the Kinect device.
I am busting my ass trying to find a way to detect whether a certain hand is closed or open.
I saw the Kinect for Windows Toolkit, but the documentation is non-existent and I can't find a way to make it work.
Does anyone know of a simple way to detect the hand's state? Even better if it doesn't involve the Kinect toolkit.
This is how I did it eventually:
First things first, we need a dummy class that looks somewhat like this:
public class DummyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId,
        InteractionHandType handType,
        double x,
        double y)
    {
        var result = new InteractionInfo();
        result.IsGripTarget = true;
        result.IsPressTarget = true;
        result.PressAttractionPointX = 0.5;
        result.PressAttractionPointY = 0.5;
        result.PressTargetControlId = 1;

        return result;
    }
}
Then, in the main application code, we need to register for the interaction events like this:
this.interactionStream = new InteractionStream(args.NewSensor, new DummyInteractionClient());
this.interactionStream.InteractionFrameReady += InteractionStreamOnInteractionFrameReady;
Finally, the code for the handler itself:
private void InteractionStreamOnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    using (InteractionFrame frame = e.OpenInteractionFrame())
    {
        if (frame != null)
        {
            if (this.userInfos == null)
            {
                this.userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
            }

            frame.CopyInteractionDataTo(this.userInfos);
        }
        else
        {
            return;
        }
    }

    foreach (UserInfo userInfo in this.userInfos)
    {
        foreach (InteractionHandPointer handPointer in userInfo.HandPointers)
        {
            string action = null;

            switch (handPointer.HandEventType)
            {
                case InteractionHandEventType.Grip:
                    action = "gripped";
                    break;

                case InteractionHandEventType.GripRelease:
                    action = "released";
                    break;
            }

            if (action != null)
            {
                string handSide = "unknown";

                switch (handPointer.HandType)
                {
                    case InteractionHandType.Left:
                        handSide = "left";
                        break;

                    case InteractionHandType.Right:
                        handSide = "right";
                        break;
                }

                if (handSide == "left")
                {
                    if (action == "released")
                    {
                        // left hand released code here
                    }
                    else
                    {
                        // left hand gripped code here
                    }
                }
                else
                {
                    if (action == "released")
                    {
                        // right hand released code here
                    }
                    else
                    {
                        // right hand gripped code here
                    }
                }
            }
        }
    }
}
SDK 1.7 introduces the interaction concept called "grip". You can read about all the KinectInteraction concepts at the following link: http://msdn.microsoft.com/en-us/library/dn188673.aspx
The way Microsoft has implemented this is via events from a KinectRegion. Among the KinectRegion events are HandPointerGrip and HandPointerGripRelease, which fire at the appropriate moments. Because the event comes from the element the hand is over, you can easily take appropriate action from the event handler.
Note that a KinectRegion can be anything. The base class is a ContentControl, so you can place anything from something as simple as an image to a complex Grid layout within the region to be acted on.
You can find an example of how to use this interaction in the ControlBasics-WPF example, provided with the SDK.
UPDATE:
KinectRegion is simply a fancy ContentControl, which in turn is just a container that can have anything put inside it. Have a look at the ControlBasics-WPF example on the Kinect for Windows CodePlex and search for KinectRegion in the MainWindow.xaml file. You'll see that there are several controls inside it which are acted upon.
To see how Grip and GripRelease are implemented in this example, it is best to open the solution in Visual Studio and do a search for "grip". The way they do it is a little odd, in my opinion, but it is a clean implementation that flows very well.
As far as I know, the Microsoft Kinect for Windows SDK is not the best way to detect open and closed hands. Microsoft provides tracking of 20 body joints and does not include the fingers of the hand. You can take advantage of the Kinect interactions for that in an indirect way. This tutorial shows how:
http://dotneteers.net/blogs/vbandi/archive/2013/05/03/kinect-interactions-with-wpf-part-iii-demystifying-the-interaction-stream.aspx
But I think the best solution for tracking finger movements would be the OpenNI SDK.
Some of the OpenNI middlewares allow finger tracking.
You can use something like this:
private void OnHandleHandMove(object source, HandPointerEventArgs args)
{
    HandPointer ptr = args.HandPointer;
    if (ptr.HandEventType == HandEventType.Grip)
    {
        // TODO
    }
}
