In Android, you can interface with the native telecom system.
The Android framework includes the android.telecom package, which contains classes that help you build a calling app according to the telecom framework. Building your app according to the telecom framework provides the following benefits:
Your app interoperates correctly with the native telecom subsystem in the device.
Your app interoperates correctly with other calling apps that also adhere to the framework.
The framework helps your app manage audio and video routing.
The framework helps your app determine whether its calls have focus.
What I personally care about is accepting and rejecting incoming calls, routing the audio streams of incoming calls, and knowing when the caller has hung up. The closest thing I have found is iOS's CallKit, but it doesn't seem to be as feature-rich as Android's telecom library and lacks the features I need.
So far I know I am able to:
Reject calls (by disallowing the call in reportNewIncomingCall)
Find out when a call has finished (by using a CXCallObserver)
I am still looking for:
Accept calls
Stream audio to my incoming calls
Which library should I use for these two things, and are both even possible in iOS?
My use case
I basically want the ability to accept a call and then, instead of immediately connecting to the user, play an audio stream first, before the user gets notified.
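For reference, here is roughly the CallKit code behind the two things I listed above (a minimal sketch; the provider name, handle value, and end reason are placeholders of mine):

```swift
import CallKit

// Minimal sketch of what I have working so far; the provider name,
// handle value, and end reason are placeholders.
final class CallManager: NSObject, CXCallObserverDelegate {
    private let provider = CXProvider(
        configuration: CXProviderConfiguration(localizedName: "MyApp"))
    private let callObserver = CXCallObserver()

    override init() {
        super.init()
        callObserver.setDelegate(self, queue: nil)
    }

    // "Reject" a call: report it, then immediately report it as ended.
    func rejectIncomingCall(uuid: UUID) {
        let update = CXCallUpdate()
        update.remoteHandle = CXHandle(type: .phoneNumber, value: "unknown")
        provider.reportNewIncomingCall(with: uuid, update: update) { [weak self] error in
            guard error == nil else { return }
            self?.provider.reportCall(with: uuid, endedAt: nil, reason: .unanswered)
        }
    }

    // Find out when a call has finished.
    func callObserver(_ callObserver: CXCallObserver, callChanged call: CXCall) {
        if call.hasEnded {
            print("Call \(call.uuid) ended")
        }
    }
}
```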
Related
I found various threads here about how muting or canceling incoming calls (or messages) with the iOS SDK is not possible, because Apple doesn't want an app to access system-level settings. Well, in fact, it's not possible with the official tools, which means that if you somehow manage to do it, your app will not be accepted into the iTunes Store.
I have been asked to assess the possibility of an app that could do just that. Namely, my client has seen these two apps:
https://itunes.apple.com/us/app/lifesaver-distracted-driving/id874231222?mt=8
https://itunes.apple.com/us/app/at-t-drivemode/id907208943?mt=8
and they are sure that an app basically exactly like these (in terms of functionality) can be made.
So here I am, asking: how did these two apps succeed at the impossible, and how did they manage to get those apps into the iTunes Store if muting your phone is not an Apple-approved option? I am not really asking for source code, although I certainly wouldn't reject examples; rather, I am asking for pointers to the class, book, or documentation I have to look up to figure out whether this is possible. Apple's CTCall and CT* classes did not seem to help me much.
Apple added the CallKit framework in iOS 10 to allow app developers to do this sort of thing, among others. For docs, see:
https://developer.apple.com/reference/callkit
It is now possible to detect and block unwanted phone calls on iOS 10 and above. See the CallKit framework:
The CallKit framework (CallKit.framework) lets VoIP apps integrate with the iPhone UI and give users a great experience. Use this framework to let users view and answer incoming VoIP calls on the lock screen and manage contacts from VoIP calls in the Phone app’s Favorites and Recents views.

CallKit also introduces app extensions that enable call blocking and caller identification. You can create an app extension that can associate a phone number with a name or tell the system when a number should be blocked.
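To give an idea of what that extension looks like, here is a minimal sketch of a CXCallDirectoryProvider subclass (the phone numbers and label below are placeholders):

```swift
import CallKit

// Minimal sketch of a Call Directory extension; the numbers and label
// are placeholders. The extension's Info.plist must name this class
// as the principal class.
final class CallDirectoryHandler: CXCallDirectoryProvider {
    override func beginRequest(with context: CXCallDirectoryExtensionContext) {
        // Blocking entries must be added in ascending numerical order.
        let blocked: [CXCallDirectoryPhoneNumber] = [1_800_555_0100, 1_800_555_0199]
        for number in blocked {
            context.addBlockingEntry(withNextSequentialPhoneNumber: number)
        }

        // Identification entries associate a number with a display name.
        context.addIdentificationEntry(withNextSequentialPhoneNumber: 1_408_555_0100,
                                       label: "Example Labs")

        context.completeRequest()
    }
}
```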
I'm trying to get into a new project by creating an iOS application. But before I start, I would like to understand some points:
Is it possible for an application to make a phone call? What I mean is: assuming we have a phone number we would like to call, would it be possible to use my application to call this number?
Is it possible for an application to speak during a phone call? After the application has started the call, could some predefined statements be played into the call?
Is it possible for this application to hear, record, and analyze what the other person on the line is saying? (Leaving aside the privacy issue, and assuming that the other person consents.)
Could you please help me? If my questions aren't clear, please tell me and I will try to explain them another way.
Many Thanks
F.P.
iOS is very restrictive about the system behaviors that third-party applications can influence.
To answer your question bluntly: a third-party application can prompt the user to initiate a phone or FaceTime call. Once the call is initiated, however, your app enters a background state and relinquishes control to the system. The app cannot contribute to, or read any data related to, the system phone or FaceTime call.
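For illustration, prompting such a call is just a matter of opening a tel: URL (a sketch; the phone number is a placeholder):

```swift
import UIKit

// Sketch: prompting the user to place a system phone call.
// The phone number passed in is a placeholder.
func promptPhoneCall(to number: String) {
    guard let url = URL(string: "tel:\(number)"),
          UIApplication.shared.canOpenURL(url) else { return }
    // The system confirms with the user before dialing; once the call
    // starts, the app is backgrounded and has no access to the call.
    UIApplication.shared.open(url)
}

// Example: promptPhoneCall(to: "14085550100")
```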
iOS 10 introduces CallKit, a VoIP framework that allows third-party apps to use the built-in calling UI with a custom protocol. You could implement your own protocol (and host the servers that handle the exchange of information) and build against CallKit to make your calls feel like system calls. You'd be responsible for all aspects of the custom call protocol, so reading voices, contributing audio, etc. would all be possible (and up to your implementation).
Outside of iOS 10, you would have to build your own VoIP system and interface entirely from scratch.
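To give a feel for the CallKit side, a provider delegate might look roughly like this (a sketch; the app name and the audio/signaling hooks are placeholders for your own protocol):

```swift
import CallKit

// Sketch of the provider side of a CallKit-backed VoIP app; the app
// name and the audio/signaling hooks stand in for your own protocol.
final class ProviderDelegate: NSObject, CXProviderDelegate {
    private let provider: CXProvider

    override init() {
        let config = CXProviderConfiguration(localizedName: "MyVoIPApp")
        provider = CXProvider(configuration: config)
        super.init()
        provider.setDelegate(self, queue: nil)
    }

    func providerDidReset(_ provider: CXProvider) {
        // Tear down any in-flight calls here.
    }

    // Called when the user answers from the system call UI.
    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        // Start your own audio session and signaling for action.callUUID.
        action.fulfill()
    }

    func provider(_ provider: CXProvider, perform action: CXEndCallAction) {
        // Stop audio and signaling for action.callUUID.
        action.fulfill()
    }
}
```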
For more info on CallKit:
WWDC Enhancing VoIP Apps with CallKit
CallKit Enabled Sample App
I want to create a project that will interact with the iPhone/iPad via the 3.5mm jack. There are a bunch of these accessories on kickstarter.com, but I couldn't find any SDK that provides a way to get data from the jack input.
I have seen that some people at progical.com have a sort of SDK that can manage this kind of connection, but they haven't answered me yet (I applied for their SDK a few months ago). Is there any alternative? I want to make this project in order to get my degree, so the Apple MFi program won't apply.
The project will consist of a bunch of sensors that send data to my app through the 3.5mm audio jack. My app will then process the received data.
The 3.5" Jack connector is originally supposed to send and receive audio data. It means that if a connection is plugged in, the OS will automatically redirect all audio signals to it (with a few exceptions). Thus, you can access the data using the built-in audio processing APIs of iOS, for example CoreAudio and audio queues.
As an example, you can generate and receive signals of different frequencies, which can be used to control and get information from external devices (you'll need some kind of electrical engineering for this to work, though - filtering, separating control frequencies, etc.).
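As a rough illustration of the receive side (using the newer AVAudioEngine API rather than raw audio queues; the demodulation of the sensor's frequency encoding is left to you):

```swift
import AVFoundation

// Sketch: reading raw samples from the jack (microphone) input.
// Decoding the sensor's frequency encoding (FFT, Goertzel filter,
// etc.) is not shown; this only gets you the sample data.
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    // `samples` holds buffer.frameLength PCM values from the input;
    // run your frequency detection / demodulation on them here.
    _ = samples
}

try? engine.start()
```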
Is there any supported way of displaying the standard "TV Connected" UI with MPMoviePlayerViewController when an external display is connected? I may be missing something obvious but there doesn't seem to be any documentation about it, and there's nothing in the API that points towards enabling this functionality.
Our app already supports TV Out using some (rather nasty) custom code, but this started acting strangely with the release of iOS 6. We patched the existing release and sent it out to testers, but one of them began complaining that the UI looked different, and to my surprise he sent screenshots of the old app on iOS 5 using the normal "TV Connected" UI, which (as far as I knew) wasn't available to third party developers.
I know there are examples of using a UIWebView to achieve this functionality, but that's not suitable for us because we need to respond to notifications from the movie player in order to report playback state & progress to our server software. The standard UI also handles certain situations (e.g. AirPlay mirroring, which we can't detect with public APIs) more elegantly than we've managed to achieve with our custom code.
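For context, this is the kind of notification handling we depend on for reporting (a sketch; videoURL and the server-reporting hook are placeholders of ours):

```swift
import MediaPlayer

// Sketch of the playback-state observation we rely on; videoURL and
// the server-reporting hook are placeholders.
let videoURL = URL(string: "https://example.com/video.mp4")!
let playerController: MPMoviePlayerViewController =
    MPMoviePlayerViewController(contentURL: videoURL)

NotificationCenter.default.addObserver(
    forName: .MPMoviePlayerPlaybackStateDidChange,
    object: playerController.moviePlayer,
    queue: .main
) { _ in
    let state = playerController.moviePlayer.playbackState
    print("Playback state changed: \(state.rawValue)")
    // reportToServer(state)  // hypothetical reporting hook
}
```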
I would like to build a simple reader app for the iPad 2 that would allow users to navigate/read via voice controls. The app would allow the user to enter a mode where the microphone was live and listened for predefined keywords like 'down', 'up', 'next', 'back', 'home', etc.
I don't want to reinvent the wheel on this, so I'm wondering, first, whether someone has done this already, and if not, whether there are any good tutorials or SDKs available to help with recording someone's voice and then comparing future input to see if it matches, or just dealing with the microphone in general.
Let's put aside that this is a fairly vaguely worded question for the moment.
If you are expecting to allow voice control in your app that somehow works throughout the entire device, it's just not possible. Your app would only work to control itself -- or at least itself and whatever external hooks you can normally get to the rest of the device, like, say, playing a song out of the user's iTunes library.
If you're planning on doing this in a jailbroken environment, then you should find some open-source library that does voice recognition -- if there are any -- and start from there. Be prepared for a very long haul, though.
Dragon Mobile SDK is what you're looking for.
http://dragonmobile.nuancemobiledeveloper.com/
There may be other voice recognition SDKs out there, but this is the only one I can think of off the top of my head.
You can find a library called CMU Sphinx. There's an iPhone version of it called PocketSphinx. See if it fits your needs.
I would like to build a simple reader app for the iPad 2 that would allow users to navigate/read via voice controls.
The Voice Control feature introduced in iOS 13 fully meets your request: you can control your device and your app with your voice exactly as you would with touches.
It's also possible, for instance, to define actions for specific words.
The device settings for this new feature are well detailed (Accessibility > Voice Control).
If you need dedicated names to be read out in your app, use the accessibilityUserInputLabels property to define them.
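For instance, a minimal sketch (the button and the label strings are hypothetical):

```swift
import UIKit

// Sketch: giving a control alternative spoken names for Voice Control.
// The button and the label strings are hypothetical examples.
let playButton = UIButton(type: .system)
playButton.setTitle("Play", for: .normal)

// Users can now say "Tap Play", "Tap Start", or "Tap Go" (iOS 13+).
playButton.accessibilityUserInputLabels = ["Play", "Start", "Go"]
```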
That's definitely the built-in tool you need to reach your goal: no need for an external library or SDK, everything is provided natively. ;o)