How fast is UIAccessibilityIsVoiceOverRunning()? - ios

In a project I'm working on, I record usage metrics for various features, and I want to also track how often the features are used in accessibility mode. To that end, I intend to use the UIAccessibilityIsVoiceOverRunning() function.
What I don't have a handle on, nor is it specified in the documentation, is whether calling this multiple times from multiple places will have an adverse impact on the overall latency of my app. There are a lot of metrics I'd like to add this to, so I worry about the combined effect of such a change. Any ideas?

Before answering, I need to caution:
Be careful not to prematurely optimize; there may be no problem here.
Consider whether you really want the answer to this question. Absolute user numbers for a particular product seldom bolster the case for accessibility. Supporting access is a moral, and sometimes legal, obligation and is not always supported by easily tabulated business metrics.
There is more than one "accessibility mode" on iOS. Measuring VoiceOver use, alone, overlooks many other accessibility tools, and their users, including Dynamic Type, Switch Control, Touch Accommodations, and others.
That said, if by some coincidence UIAccessibilityIsVoiceOverRunning() is too expensive for your particular use case, you could register for VoiceOver status change notifications using UIAccessibilityVoiceOverStatusChanged and cache the value yourself.
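Here is a minimal sketch of that caching approach, using the Swift 4.2+ spellings of the same APIs (UIAccessibility.isVoiceOverRunning and UIAccessibility.voiceOverStatusDidChangeNotification):

```swift
import UIKit

/// Reads the VoiceOver state once, then keeps it updated from the status-change
/// notification so metrics code never has to query UIKit directly.
final class VoiceOverStatusCache {
    static let shared = VoiceOverStatusCache()

    private(set) var isVoiceOverRunning = UIAccessibility.isVoiceOverRunning
    private var observer: NSObjectProtocol?

    private init() {
        observer = NotificationCenter.default.addObserver(
            forName: UIAccessibility.voiceOverStatusDidChangeNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            self?.isVoiceOverRunning = UIAccessibility.isVoiceOverRunning
        }
    }
}
```

Your metrics code can then read VoiceOverStatusCache.shared.isVoiceOverRunning as often as it likes for the cost of a property access.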

Related

AlphaVantage API Technical Indicators: Do they use only information of the past?

I am writing because I found no public documentation or code that resolves this question. I have been using the AlphaVantage APIs for a project on stock market prediction with machine learning. I use a lot of technical indicators from the AlphaVantage library, and many of them operate on rolling sequences (windows) of data points (e.g., moving averages).
However, many financial libraries tend to update the values they previously computed for some of these indicators by using windows that retain future information relative to the point in time the indicator refers to. Obviously, that would represent "hidden" information that a predictive system like mine, which relies only on past or present information, should not have access to.
Hence, I was wondering whether the same is true of the AlphaVantage library. I manually checked many indicators for the same stock (and repeated the process for many stocks) several days apart, and I did not find any inconsistencies in the values for the common dates (the only difference is that the most recent versions of those indicators contain new points reflecting the latest price movements).
I would be very grateful if anybody could help me resolve this.
Most indicators will use a look-back window of quote values, including the current price, to calculate current indicator values. Many will also include previously calculated indicator values as a basis for current indicator values. Fewer still will recalculate older indicator values based on new price information.
For this last scenario, looking at the AlphaVantage library, I don't see any indicators that would recalculate older values based on newer data. If you're seeing indicator values change, it's probably due to a revision or update of the underlying quote history.
I have a rather large .NET library of indicators, so I’m familiar with which kinds behave that way, due to the mathematics.
Some examples of indicators with retroactive recalculation are ZigZag and Williams Fractal. They do this because they identify local high and low points, which can't be confirmed without several subsequent bars of data. In other words, you cannot mark a bar as a high point until several lower bars occur thereafter.
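As a toy illustration (in Swift, not AlphaVantage's code), here is roughly what that confirmation delay looks like for a simplified two-bar Williams Fractal high:

```swift
/// Flags bars that are local highs, using a simplified Williams Fractal rule:
/// a bar counts as a fractal high only when `wing` bars on each side are lower.
/// The most recent `wing` bars can never be flagged yet, because their
/// confirming bars don't exist.
func fractalHighs(highs: [Double], wing: Int = 2) -> [Bool] {
    highs.indices.map { i in
        guard i >= wing, i + wing < highs.count else { return false }
        let before = highs[(i - wing)..<i]
        let after = highs[(i + 1)...(i + wing)]
        return before.allSatisfy { $0 < highs[i] } && after.allSatisfy { $0 < highs[i] }
    }
}

// fractalHighs(highs: [1, 3, 2, 5, 4, 3, 2]) flags the bar with value 5,
// but only because two lower bars already follow it.
```

That lag is why re-querying such an indicator later can legitimately show values "appearing" at past dates, even though nothing about the old quotes changed.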

Lidars in Drake

I want to simulate lidars. I saw that a DepthSensor class was mentioned in the documentation, but I have not found its actual implementation. For now, I am planning to use the RgbdSensor class and use only the slice of the depth point cloud at the height I need to simulate my lidars.
Just to get your input on that, maybe I missed something, but is there a specific class for lidars, and how would you go about adding lidars to a simulation?
Thanks in advance,
Arnaud
You've discovered an anachronism in the code. There had previously been a lidar-like sensor (called DepthSensor), and the extant documentation refers to that class. The class's removal should have been accompanied by a cleanup of the documentation.
The approach you are taking is the expected approach given Drake's current state.
There has always been an intention to re-introduce a lidar-like sensor in Drake's current architecture. It simply hasn't been a high priority.
I'd recommend you proceed with what you're currently doing (lidar from depth images) but, at the same time, post an issue requesting a lidar-like query, with specific focus on the minimum lidar properties that you require. A discussion of how that would differ from what you can actually get from the depth images would better inform us of your unique needs and how to prioritize them. (You can also indicate more advanced features that you need less urgently but would be good to have, of course.)
As for the question: how would you go about adding lidars?
That's problematic. Ideally, what you would need is ray-casting ability. The intent is for QueryObject to support such a query, but it hasn't happened yet. (It's certainly the underlying technology we'd have used to implement a LidarSensor.) In the absence of that kind of functionality, you'd essentially have to do it yourself in the most horrible, tedious way imaginable. I'd go so far as to suggest that it's not feasible with the current API.

Does the iOS Speech API support grammar?

I was investigating various Speech Recognition strategies and I liked the idea of grammars as defined in the Web Speech spec. It seems that if you can tell the speech recognition service that you expect “Yes” or “No”, the service could more reliably recognize a “Yes” as “Yes”, “No” as “No”, and hopefully also be able to say “it didn’t sound like either of those!”.
However, in SFSpeechRecognitionRequest, I only see taskHint with values from SFSpeechRecognitionTaskHint of confirmation, dictation, search, and unspecified.
I also see SFSpeechRecognitionRequest.contextualStrings, but it seems to be for a different purpose. I.e., I think I should put brands/trademark type things in there. Putting “Yes” and “No” in wouldn’t make those words any more likely to be selected because they already exist in the system dictionary (this is an assumption I’m making based on the little the documentation says).
Is there a way with the API to do something more like grammars or, even more simply, just to provide a list of expected phrases so that the speech recognition is more likely to come up with a result I expect instead of similar-sounding gibberish/homophones? Does contextualStrings perhaps increase the likelihood that the system chooses one of those strings instead of just expanding the system dictionary? Or maybe I’m taking the wrong approach and am supposed to enforce the grammar on my own, enumerating over SFSpeechRecognitionResult.transcriptions until I find one matching an expected word?
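Concretely, something like this Swift sketch is the kind of thing I have in mind, combining taskHint, contextualStrings, and a manual scan of the n-best transcriptions (the audio URL and phrase list are just placeholders):

```swift
import Speech

let expectedPhrases = ["Yes", "No"]                        // the hoped-for "grammar"
let audioFileURL = URL(fileURLWithPath: "/tmp/answer.m4a") // placeholder input

// Note: a real app must first call SFSpeechRecognizer.requestAuthorization(_:).
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
let request = SFSpeechURLRecognitionRequest(url: audioFileURL)
request.taskHint = .confirmation            // closest built-in hint to a yes/no exchange
request.contextualStrings = expectedPhrases // nudge recognition toward these words

_ = recognizer?.recognitionTask(with: request) { result, _ in
    guard let result = result, result.isFinal else { return }
    // Enforce the "grammar" by hand: take the first n-best transcription that matches.
    let match = result.transcriptions.first { transcription in
        expectedPhrases.contains { $0.caseInsensitiveCompare(transcription.formattedString) == .orderedSame }
    }
    print(match?.formattedString ?? "didn't sound like either of those")
}
```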
Unfortunately, I can’t test these APIs myself; I am merely researching the viability of writing a native iOS app and do not have the necessary development environment.

What Actionscript features do not work on iOS?

There seems to be a lot of conflicting information out there. It might be that support has increased recently, or that changes to adobe.com/air have made some information difficult to find, but I can't track down a definitive list of things to avoid.
I know that ActionScript won't run in loaded SWFs, and I know that some people say that filters, blend modes, and Halo components won't work. I've also read many posts saying they will (at least that blend modes will, and that Halo will run, but slowly, so still use Spark).
I have a large amount of AS3 code that I'm planning to upgrade to work on iOS, but at the moment I have no idea what will break (or what will break once those things have been fixed!).
Is there a list of unsupported APIs, or iOS dos and don'ts?
Thanks
:S
First, yes: externally loaded SWFs will not run. You can, however, embed SWFs/SWCs into your project and include them inside your package.
As far as Flex components go, stay away from Halo. You should use Flex 4.6 and stick to components with mobile skins. I recommend downloading Tour de Flex (http://www.adobe.com/devnet/flex/tourdeflex.html) to get an idea of what's available.
As far as blend modes go, I'm not really sure; I haven't used them on mobile yet. Filters are supported, but they are expensive. For drop shadows on rectangles there is something called RectangularDropShadow. This is actually a component and therefore less expensive; however, it can only be used on rectangular groups.
You should have access to all of the AIR APIs. You will, however, be restricted when using some of the File-related classes, since I don't believe you can leave your Application Storage Directory.
One big performance tip I can give is to use AS3 over MXML whenever possible, ESPECIALLY when creating item renderers. Use BitmapImage over Image whenever possible, again especially in item renderers. Use cacheAsBitmap whenever you have images that don't change often. And stay away from any Flex component that doesn't have a mobile skin.
You may also want to read up on View and destruction policies.
http://www.adobe.com/devnet/flex/articles/flex-mobile-development-tips-tricks-pt1.html
This link also has some more performance tips
http://www.adobe.com/devnet/flex/articles/flex-mobile-performance-checklist.html

Creating PDFs from iOS text fields

I'm working on the requirements & specifications for a new iOS app intended for use by certain professionals working "in the field". All day long for weeks on end, these folks have a sizable reporting burden to their superiors using standardized forms that track all different kinds of information. Traditionally, those forms are in PDF, and are simply printed and filled out in ink and then shared with the dozens to hundreds of others working the same operation. Sometimes they'll use a PDF with form fields so the data can be typed and then printed as part of the form. Either way, given their workflow, time and stress pressures, and other factors, it's not a very productive way to get the standardized reporting forms done.
The app we're spec'ing would offer an iOS (and Android, if possible -- but secondary or even tertiary requirement at this point) user interface for tracking the data they enter in the field, organizing it in a logical manner for each individual user, and with the press of a button, take all that data and automatically create a PDF file of it using the standardized form.
Of course, the forms are STRICTLY and rigidly standardized in this industry, and any deviation in format, structure, or presentation is simply not tolerable.
So I was approaching the project by thinking the app would maintain an internal repository of the original standardized forms from the accrediting organization, with each possible data area defined as a field. The app would:
open the necessary PDF form for the task at hand;
parse its dictionary to identify the specific data fields;
for every single field, identify the relevant data from the iOS app's own user interface and data tables, and assign that data to the corresponding field from the PDF/dictionary;
export the PDF to a NEW PDF file, which the app would either email or store through iCloud, Dropbox, or some other form of file sharing.
The catch with #4 is that the PDF file must remain editable by standard PDF applications on Windows and Mac (Acrobat, Preview, etc.), so all the fields need to remain. And the PDF should be viewable just the same on either Windows or Mac.
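For illustration, here is a rough sketch of what steps 1-4 might look like if done natively with Apple's PDFKit (available on iOS 11 and later); formURL, outputURL, and collectedValues are placeholders for our own data, and whether PDFKit preserves every nuance of a strictly standardized form would still need to be verified against the real documents:

```swift
import PDFKit

// Placeholders standing in for the bundled standardized form and the data collected in the app.
let formURL = URL(fileURLWithPath: "/path/to/standardized-form.pdf")
let outputURL = URL(fileURLWithPath: "/path/to/filled-report.pdf")
let collectedValues: [String: String] = ["IncidentDate": "2024-01-01", "TeamLead": "J. Smith"]

if let document = PDFDocument(url: formURL) {
    // Walk every form-field (widget) annotation and assign the matching value by field name.
    for pageIndex in 0..<document.pageCount {
        guard let page = document.page(at: pageIndex) else { continue }
        for annotation in page.annotations {
            if let name = annotation.fieldName, let value = collectedValues[name] {
                annotation.widgetStringValue = value
            }
        }
    }
    // Write a new PDF; the field annotations stay in place, so it remains editable in Acrobat/Preview.
    let saved = document.write(to: outputURL)
    print("wrote filled form:", saved)
}
```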
Now, at NO time will the PDF (neither the original nor the exported final document) EVER need to be displayed inside the iOS app, nor would it make much sense to be able to do so.
I don't know if any of this is possible. This is our first iOS project, and we've been leaning towards building the app using Moai or Corona or some other framework to save development time and make porting across platforms easier. That said, if it cannot be done using Lua and one of these frameworks (I remain skeptical... they seem HIGHLY geared towards games), we're not opposed to doing it directly in Objective-C and building an Android version some time down the road.
But either way, I'm at a loss in assessing whether this is even a practical undertaking. Our requirements are clear, and frankly if this can't be done, the project won't be pursued any further. But I could definitely use some help from you folks in identifying what my options are, whether I can do it in Lua, and what SDK(s) would be most useful in accomplishing this.
Based on what you've said, it seems that there is little reason to do the PDF-based part of the work on the mobile device itself since:
you don't need to display it on the iPad
you plan to email it or store it in the cloud
if you write this for iOS you will have to write it again for Android, as you've mentioned
Can you simplify the mobile part of your requirement by focusing on data collection and validation, then firing the data off to a server to do the document production? That will give you a lot more flexibility in the tools that you can use to merge the data into PDF docs. If so, you could look at creating PDFs or populating the fields from code using something like iText (C# or Java). If you don't want to build your own back-end server, you could try something like Docmosis Cloud, but that might not allow you to get your precise layouts.
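As a rough sketch of that split, the mobile side would only collect and validate the data, then post it as JSON to the back end that owns the PDF merge (the endpoint URL and Report shape here are made up for illustration):

```swift
import Foundation

// Hypothetical shape of a completed form; the real model would mirror the standardized form's fields.
struct Report: Codable {
    let formID: String
    let fields: [String: String]
}

func submit(_ report: Report) {
    // Hypothetical endpoint; the server (iText, Docmosis, or similar) produces the PDF.
    let endpoint = URL(string: "https://example.com/api/reports")!
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONEncoder().encode(report)

    URLSession.shared.dataTask(with: request) { _, response, error in
        if let error = error {
            print("Upload failed; queue the report for retry when connectivity returns: \(error)")
        } else if let http = response as? HTTPURLResponse {
            print("Server responded with status \(http.statusCode)")
        }
    }.resume()
}
```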
Certainly the catch you mentioned, needing to keep the PDFs editable with their fields, is a significant gotcha in all cases. If you could convince the stakeholders that it is better to generate the final documents from your system (generate draft, review, update data, generate again, etc.) rather than generating editable documents that you then lose control of and traceability over, then you will be miles ahead.
Hope that helps.
Did you consider just generating a new PDF using an image of the form as the background and writing the user's data into the required areas over the form image? That would reduce the complexity of trying to parse the original form PDFs.
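As a rough Swift sketch of that idea (the image name, coordinates, and page size are placeholders, and the obvious trade-off is that the output has no editable form fields):

```swift
import UIKit

// Placeholder inputs: a scan of the form and the collected values with their positions on the page.
let formImage = UIImage(named: "standardized-form-page1")!
let values: [(text: String, position: CGPoint)] = [
    ("2024-01-01", CGPoint(x: 120, y: 180)),
    ("J. Smith",   CGPoint(x: 120, y: 220)),
]

let pageBounds = CGRect(x: 0, y: 0, width: 612, height: 792) // US Letter at 72 dpi
let renderer = UIGraphicsPDFRenderer(bounds: pageBounds)
let pdfData = renderer.pdfData { context in
    context.beginPage()
    formImage.draw(in: pageBounds) // the scanned form becomes the page background
    let attributes: [NSAttributedString.Key: Any] = [.font: UIFont.systemFont(ofSize: 11)]
    for value in values {
        (value.text as NSString).draw(at: value.position, withAttributes: attributes)
    }
}
try? pdfData.write(to: URL(fileURLWithPath: "/path/to/flattened-report.pdf"))
```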
That's a point of worthwhile discussion, but one we don't have an ideal answer on. I tend to think of that as the almost perfect scenario -- it'd be considerably easier to develop. There are two key issues with this approach that have made us table it except as a very last resort:
The users of this product would be working in the field. That field could be quite literally anywhere--the streets of Manhattan, a disaster-stricken area with infrastructure that's been severely damaged or even destroyed, or the most war-ravaged third world country. If it were the streets of, say, Manhattan, there's no problem--their iOS or Android device will have 3G or Wi-Fi access just about anywhere they go. In the latter two scenarios (which are arguably more common in this industry), that connectivity may be very limited. The concern is whether the end user's ability to be productive or to see and share data with their colleagues will be too greatly restricted if they don't have a decent signal. To be fair though, even today they often aren't even using mobile devices, forcing them to go back to a headquarters type location or use radios to share information, effectively negating my point here. But if we're not going to significantly increase their productivity in the field, it just gives us pause to think through whether or not we have enough of a value proposition to ask them to fairly significantly change their methods of doing things.
To your latter point, no there's no convincing the stakeholders that this new system is the better approach. Even if there were, it would take years to do so. These forms are a part of a well-defined, decades-old standard used by literally thousands of organizations.
