Sony has released a rather comprehensive API for wifi control of their selection of cameras here: https://developer.sony.com/downloads/camera-file/sony-camera-remote-api-beta-sdk/
I have been searching for the methods to use so that I can manually control focus. I have been able to set the focus mode to "manual focus", but I don't see any methods for setting the focus point. There are several methods for controlling zoom, and I would expect equivalent methods to exist for focus. Yet this feature is seemingly implemented in the Sony PlayMemories Android application, so it must be possible, I think... but as that app is a black box, it's hard to tell which API methods it's calling.
Could someone share what the methods are for manually focusing the camera? I am on a Sony a7R with the lovely 90mm macro.
The beta SDK from March 15, 2016 provides no information on how to do this. The lack of this key feature is directly blocking my ability to do effective deep focus stacking.
Deep focus stacking sounds great! But unfortunately, manual focus control is not provided by the API.
Once you set the "Manual Focus" option, you can only move focus by using the focus ring on the lens. (Don't ask me why Sony added such an option to the API for Wi-Fi devices!)
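For completeness, every call in the Camera Remote API is a JSON-RPC style POST to the camera's service endpoint, so the call the question mentions (switching the focus mode to manual) looks roughly like the sketch below. The endpoint URL is an assumption; the real one comes from the device-description XML you discover via SSDP, and the "MF" mode string should be checked against the beta API reference.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class SetFocusMode {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint: the actual service URL is taken from the
        // camera's device-description XML (discovered via SSDP).
        URL url = new URL("http://192.168.122.1:8080/sony/camera");

        // JSON-RPC style body used by the Camera Remote API; "MF" is assumed
        // to be the manual-focus mode string per the beta API reference.
        String body = "{\"method\":\"setFocusMode\",\"params\":[\"MF\"],\"id\":1,\"version\":\"1.0\"}";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes("UTF-8"));
        }

        // A successful setter call returns a small JSON result object.
        try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
            System.out.println(in.useDelimiter("\\A").next());
        }
    }
}
```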
Trying to utilize a mouse as a novel input device in our specialized iPad app. Since iOS has HID drivers and mice are HID devices, is it possible to programmatically receive mouse movement deltas in code?
To be clear, I am not asking about an on-screen mouse cursor for iOS. I mean strictly in the sense of reading programmatic input in my own specific app.
I'm also not talking about MFi devices, which require you to develop hardware and then submit that hardware for certification by Apple. I'm referring, again, to a standard mouse using the standard HID drivers which I believe are already part of the system (it already recognizes HID keyboards).
So is something like this possible?
(This question is mostly applicable to those of us developing iPhone apps without access to an iPhone 7.)
I want to incorporate the new taptic feedback available with the iPhone 7 into my apps, and I want to make sure my uses of it align properly with how iOS uses them at a system level. Without a device I can't test this.
Apple provides a document describing the different kinds of feedback: https://developer.apple.com/ios/human-interface-guidelines/interaction/feedback/ Namely "Notification", "Impact", or "Selection".
For instance, in Mail.app, when you slide a cell to archive it, it gives taptic feedback. Which of those three above (and their corresponding "variation") does Mail.app use? I'm guessing "Selection" but may be wrong.
Bonus points for pulling down Notification Center or Control Center, as well as any others you can provide for reference, but the gestures in Mail.app would be an awesome start.
You should check out this article; it gives you an overview of how UIFeedbackGenerator works. https://www.hackingwithswift.com/example-code/uikit/how-to-generate-haptic-feedback-with-uifeedbackgenerator
Alternatively, you can create a demo project and check out which feedback is best suited for your needs.
Edit:
It's the selection feedback for the Mail app. Notification Center uses multiple feedback types depending on how you slide: pull it down slowly and you get a heavy impact, a bit faster and you get a light impact, and if you just flick it down immediately it produces no feedback at all.
I'm trying to find out if there's a way to detect the "geotag" or "store-location" setting of the camera on an Android device. I don't mean the general location access setting which I'm very familiar with, but the more "hidden" setting when you use the camera.
Most users are not even aware it exists, and it's turned off by default. I would like to be able to tell users that this setting is off so that they can turn it on if they want to; that way their pictures will have all the EXIF data concerning their location.
I hope this has not been answered before on SO; if it has, I'm sorry about it, and please link me to the right thread.
Each Android device usually ships with its own custom camera app, made by the manufacturer of that device. Each has its own UI and probably its own way/place to store this setting, if it even exists for that device. So any answer to this question would be heavily device-dependent.
But even if you just restrict yourself to the AOSP camera app, which is the app used on the Nexus devices, there's no API for this. The app asks if you want to enable GPS tagging the first time the app is run, and after that the option to enable/disable geotagging can be found in the settings.
There's no way to confirm if that setting is on, since it's not part of any public or standard Android API. You might be able to do something with the accessibility API to read these settings, but it requires substantial permissions to do so (Accessibility service documentation here).
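If you did want to experiment with the accessibility route, the plumbing is an AccessibilityService that inspects the active window. This is only a sketch (the class name is a placeholder); the node traversal needed to find a vendor-specific "store location" toggle is left as a comment, and the service must be declared in the manifest with the BIND_ACCESSIBILITY_SERVICE permission and explicitly enabled by the user.

```java
import android.accessibilityservice.AccessibilityService;
import android.view.accessibility.AccessibilityEvent;
import android.view.accessibility.AccessibilityNodeInfo;

// Sketch only: declare this service in the manifest with
// android.permission.BIND_ACCESSIBILITY_SERVICE; the user must enable it
// under Settings > Accessibility before it receives any events.
public class CameraSettingsProbeService extends AccessibilityService {

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        // Fires when window content changes, e.g. when the camera app's
        // settings screen is opened.
        AccessibilityNodeInfo root = getRootInActiveWindow();
        if (root == null) {
            return;
        }
        // What to look for here is entirely vendor-specific (the label and
        // widget for "store location" differ per camera app), which is why
        // this approach is so fragile.
        root.recycle();
    }

    @Override
    public void onInterrupt() {
        // Required override; nothing to do in this sketch.
    }
}
```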
To extract the EXIF information from the files, you could consider an example similar to the updateExif example shown in this code snippet. This would enable you to get all of the information stored in the EXIF data, including make, flash, focal length, etc.
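The linked snippet isn't reproduced here, but as a rough illustration of the same idea, android.media.ExifInterface is the standard way to read those fields (the file path is just a placeholder; in practice you'd get it from MediaStore or your own capture code):

```java
import android.media.ExifInterface;
import java.io.IOException;

public class ExifReader {

    // Reads a few common EXIF fields plus the GPS position, if present.
    public static void dumpExif(String path) throws IOException {
        ExifInterface exif = new ExifInterface(path);

        String make = exif.getAttribute(ExifInterface.TAG_MAKE);
        String model = exif.getAttribute(ExifInterface.TAG_MODEL);
        String flash = exif.getAttribute(ExifInterface.TAG_FLASH);
        String focalLength = exif.getAttribute(ExifInterface.TAG_FOCAL_LENGTH);

        // getLatLong() returns false when the picture was taken with
        // "store location" off, i.e. no GPS tags were written.
        float[] latLong = new float[2];
        boolean hasLocation = exif.getLatLong(latLong);

        System.out.println("Make: " + make + ", model: " + model);
        System.out.println("Flash: " + flash + ", focal length: " + focalLength);
        System.out.println(hasLocation
                ? "Location: " + latLong[0] + ", " + latLong[1]
                : "No location stored in EXIF");
    }
}
```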
Speaking to the Android 4.3 camera app that comes with Nexus devices, you can view and change the setting like this:
1. Open the camera app.
2. The white circle (rightmost of three icons at the bottom of the app screen) brings up the main menu. If "store location" is turned off, that circular icon is displayed with a small location icon with a line through it; if "store location" is turned on, there is no positive indication of this. Tap it to continue if you want to change the setting.
3. Tap the settings icon (middle icon of five) to bring up the "More Options" menu.
4. The "store location" icon is the leftmost of five. Tapping that icon toggles the setting (and dismisses the menu). If "store location" is off, the icon is displayed with a line through it; otherwise it is displayed without one.
Does anyone know if there is a way to programmatically enable and read text in-app using the underlying Accessibility features available in iOS 5+?
To be clear, I am talking about the following feature (but of course doing this programmatically).
I am open to alternatives, but would prefer an Apple approved way to use this particular iOS 5 feature.
Unfortunately it's not possible currently. "Speak Selection" is only available to users who enable it in Accessibility settings, and only for highlighted text in apps.
For programmatic text-to-speech you can check out iphone-tts. It works pretty well, though one caveat is that it only supports the voices it comes with; it doesn't use the "Siri voice". You can tweak the pitch and speed to your liking, but you won't be able to match the built-in iOS voice.
I am implementing a camera application using the "CameraDemo" example that comes with the BlackBerry plug-in for Eclipse. The problem is that when the screen loses focus, it does not display the camera view; instead it shows this:
Has anybody faced this problem? What's the solution?
This way of taking a picture (using the Player and VideoControl.getSnapshot()) does not work well on all BB models. I'd even say it works well only on a narrow set of BB models. So if you are going to run your app on a wide range of BB models, this is not the right way to go.
Instead, to take a picture, use the built-in Camera app. Here is a starting point on how to do that.
Basically, you invoke the built-in Camera app and listen for file-system changes to detect the new image's file path. Then you need to close the built-in Camera app somehow; it's possible to do that by simulating two 'Esc' key presses.
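A minimal sketch of that pattern in BlackBerry Java follows (the class name is a placeholder). The class and method names come from the standard RIM APIs, but treat it as a starting point: the Escape-key injection needs the event-injection permission, and its exact constructor signature should be double-checked against your JDE docs.

```java
import net.rim.blackberry.api.invoke.CameraArguments;
import net.rim.blackberry.api.invoke.Invoke;
import net.rim.device.api.io.file.FileSystemJournal;
import net.rim.device.api.io.file.FileSystemJournalEntry;
import net.rim.device.api.io.file.FileSystemJournalListener;
import net.rim.device.api.system.Characters;
import net.rim.device.api.system.EventInjector;
import net.rim.device.api.ui.UiApplication;

// Watches the file-system journal for the JPEG written by the built-in
// Camera app, then simulates Escape presses to close the Camera app again.
public class CameraCapture implements FileSystemJournalListener {

    private long lastUSN = FileSystemJournal.getNextUSN();

    public void start() {
        UiApplication.getUiApplication().addFileSystemJournalListener(this);
        // Launch the built-in Camera application.
        Invoke.invokeApplication(Invoke.APP_TYPE_CAMERA, new CameraArguments());
    }

    public void fileJournalChanged() {
        long usn = FileSystemJournal.getNextUSN();
        for (long i = usn - 1; i >= lastUSN; i--) {
            FileSystemJournalEntry entry = FileSystemJournal.getEntry(i);
            if (entry != null
                    && entry.getEvent() == FileSystemJournalEntry.FILE_ADDED
                    && entry.getPath().endsWith(".jpg")) {
                String newImagePath = entry.getPath();
                // ... hand newImagePath to your own screen/logic here ...

                // Close the Camera app by simulating two Escape presses.
                // (Requires the event-injection permission; constructor
                // signature assumed, check against your JDE docs.)
                EventInjector.KeyEvent esc = new EventInjector.KeyEvent(
                        EventInjector.KeyEvent.KEY_DOWN, Characters.ESCAPE, 0);
                EventInjector.invokeEvent(esc);
                EventInjector.invokeEvent(esc);
                break;
            }
        }
        lastUSN = usn;
    }
}
```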
Yes, this sounds a bit hacky/over-complicated, but that's how the BB engineers arranged it for us. :) BTW, this is actually not so bad compared with Android, where different device manufacturers violate the common rules and implement the Camera app in their own specific ways, so you are not able to write the code once and have it cover all Androids.