Is there any way to play the 'Classic' system sounds in iOS?

I found a list of sounds here, but I cannot seem to find the sounds that appear under the 'Classic' section of the sound settings page. Is there any way to access those sounds?
I tried using /System/Library/Audio/UISounds/alarm.caf and /System/Library/Audio/UISounds/sq_alarm.caf, but they sound different from the one I need. I also tried looking for other paths under /System/Library/Audio/UISounds/ but could not find any.

You can use AudioToolbox. I think you are looking for sound ID 1304 or 1005; that is what I get for "Classic > Alarm" on an iPhone.
import AudioToolbox

// 1304 (or 1005) maps to the "Classic > Alarm" system sound.
func playAlert() {
    AudioServicesPlayAlertSound(1304)
}
https://github.com/TUNER88/iOSSystemSoundsLibrary
https://developer.apple.com/documentation/audiotoolbox/1405202-audioservicesplayalertsound
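If you would rather play one of the .caf files under /System/Library/Audio/UISounds/ directly, here is a minimal sketch using AudioServicesCreateSystemSoundID (the file name is just an example; the exact names vary between iOS versions):

import AudioToolbox

// Build a sound ID from a system sound file and play it as an alert.
let url = URL(fileURLWithPath: "/System/Library/Audio/UISounds/alarm.caf")
var soundID: SystemSoundID = 0
if AudioServicesCreateSystemSoundID(url as CFURL, &soundID) == kAudioServicesNoError {
    AudioServicesPlayAlertSound(soundID)
}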

Related

Dynamically including subtitles in video files in iOS?

With AVFoundation we can split and merge tracks of media files. I assume a subtitle is just another track, and I want to include that track based on the language the user chooses. My idea is to bundle hard-coded subtitle files for each language I support in the project, and add the chosen subtitle track to the video file at run time.
I am not sure if this is possible with AVFoundation. Please direct me to a solution.
The Apple-provided sample code "avsubtitleswriterOSX" is compatible with iOS 7 and 8. This solved my issue.
avsubtitleswriterOSX
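For reference, here is a rough sketch of the track-merging idea described in the question, using AVMutableComposition. The file names are hypothetical, and it assumes the subtitle source is itself a media file that already contains a subtitle track (such as one produced by the avsubtitleswriter sample):

import AVFoundation

let videoAsset = AVURLAsset(url: URL(fileURLWithPath: "movie.m4v"))
let subtitleAsset = AVURLAsset(url: URL(fileURLWithPath: "subtitles-en.m4v"))

let composition = AVMutableComposition()
let range = CMTimeRange(start: .zero, duration: videoAsset.duration)

// Copy the video and audio tracks from the source movie.
for track in videoAsset.tracks where track.mediaType == .video || track.mediaType == .audio {
    let compTrack = composition.addMutableTrack(withMediaType: track.mediaType,
                                                preferredTrackID: kCMPersistentTrackID_Invalid)
    try? compTrack?.insertTimeRange(range, of: track, at: .zero)
}

// Add the subtitle track for the language the user chose.
if let subTrack = subtitleAsset.tracks(withMediaType: .subtitle).first {
    let compSub = composition.addMutableTrack(withMediaType: .subtitle,
                                              preferredTrackID: kCMPersistentTrackID_Invalid)
    try? compSub?.insertTimeRange(range, of: subTrack, at: .zero)
}

// The composition can then be played, or exported with AVAssetExportSession.
let player = AVPlayer(playerItem: AVPlayerItem(asset: composition))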

Using the AVPlayer in Swift (Xcode 6), how can I implement automatic gain control (AGC) while playing remote files?

I absolutely need to play remote files in an iOS app I'm developing, so using AVPlayer seems to be my only option. (I don't want to download the files as NSData and then use AVAudioPlayer, because that doesn't allow the files to start playing immediately.) These remote files sometimes vary greatly in output volume from one to the next. How can I implement some form of automatic gain control for AVPlayer? It's starting to seem like it's not even possible.
Also, I've explored MTAudioProcessingTap, but I couldn't get it to work using the information from the following blogs:
http://venodesigns.net/2014/01/08/recording-live-audio-streams-on-ios/
and
https://chritto.wordpress.com/2013/01/07/processing-avplayers-audio-with-mtaudioprocessingtap/
I'm open to any ideas that involve the AVPlayer. Can it be done? (Thanks in advance - cheers!)
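In case it helps, a bare-bones sketch of the MTAudioProcessingTap setup the question mentions is below. The gain applied in the process callback is a fixed placeholder rather than real AGC, the sample format is assumed to be Float, the URL is hypothetical, and note that remote HLS streams often expose no audio tracks, which is a known limitation of this approach:

import AVFoundation
import MediaToolbox

func makeTap() -> MTAudioProcessingTap? {
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: { _, clientInfo, tapStorageOut in tapStorageOut.pointee = clientInfo },
        finalize: { _ in },
        prepare: { _, _, _ in },
        unprepare: { _ in },
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the source audio, then scale the samples in place.
            guard MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                     flagsOut, nil, numberFramesOut) == noErr else { return }
            for buffer in UnsafeMutableAudioBufferListPointer(bufferListInOut) {
                let samples = buffer.mData!.assumingMemoryBound(to: Float.self)
                let count = Int(buffer.mDataByteSize) / MemoryLayout<Float>.size
                for i in 0..<count {
                    samples[i] *= 0.5 // placeholder gain; real AGC would track the signal level
                }
            }
        })
    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects, &tap)
    return status == noErr ? tap?.takeRetainedValue() : nil
}

// Attach the tap to the player item's audio mix.
let item = AVPlayerItem(url: URL(string: "https://example.com/audio.mp3")!)
if let track = item.asset.tracks(withMediaType: .audio).first, let tap = makeTap() {
    let params = AVMutableAudioMixInputParameters(track: track)
    params.audioTapProcessor = tap
    let mix = AVMutableAudioMix()
    mix.inputParameters = [params]
    item.audioMix = mix
}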

How do I navigate namespaces in MonoTouch / Xamarin for iOS?

This is an example of something that I have experienced a couple of times when working with MonoTouch:
I find a code example on the internet showing how to use the NSUrl class. I try to add it to my code in Xamarin Studio. It is simple, except I can't find which using statement I need. I try to Google, but no examples I find include the elusive using statement I am looking for. I find the official Mac Developer Library NSUrl description, but it does not tell me much (or perhaps it tells me too much).
In general, how do you go about finding which using statement to use in a case like this? Is there Xamarin documentation for this somewhere that I just can't find? I'm not looking for the specific namespace from the example, but for how to go about finding it myself.
If it's an iOS API, it will always be
MonoTouch.<iOS Framework Name>
So if you found NSUrl, you should also be able to see that it is part of Apple's "Foundation" framework.
MonoTouch.Foundation
Of course, I would recommend just letting the IDE figure it out for you:
Right-click (on NSUrl) -> Resolve -> "using MonoTouch.Foundation"

What is the simplest way to play a MIDI note for an indefinite duration in iOS?

I want to play instrument 49 in iOS at varying pitches [B2-E5] for varying durations.
I have been using the Load Preset Demo as a reference. The vibraphone.aupreset does not reference any files, so I had presumed that:
- I would be able to find and change the instrument in the aupreset file (unsuccessful so far), and
- there is some way to tell the MIDI interface to turn notes on and off without generating *.mid files.
Here's what I did:
- duplicated the audio-related code from the project,
- removed the trombone-related files and code (calling loadPresetTwo: in place of loadPresetOne:, and from init as opposed to viewDidLoad),
- added a note sequence, and a timer to turn off the previous note and turn on the next one.
Build. Run. I hear sound in the simulator.
There is NO sound coming from my iPhone.
I have triple-checked the code that I copied, as well as where the calls take place. It's all there. The only difference is that the trombone-related files and code are absent. Perhaps there is some dependency I'm not aware of. Perhaps this is a problem rooted in architectural differences between the simulator (running on a remote Mac VM) and the iPhone. Perhaps I can only speculate, because I don't know enough about the problem to know which questions to ask.
Any thoughts or suggested tests would be great!
Thanks.
MusicPlayer + MusicSequence + MusicTrack works. It was much easier than trying to guess what the code in the demo was doing.
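The answer names MusicPlayer + MusicSequence + MusicTrack as the fix. As an aside on the "indefinite duration" part of the question: a different route is AVAudioUnitSampler, whose startNote/stopNote pair holds a note until you explicitly release it, with no .mid file involved. A sketch, assuming a hypothetical SoundFont bundled with the app:

import AVFoundation
import AudioToolbox

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)
try! engine.start() // playground-style; handle errors properly in an app

// Program 48 (zero-based) is General MIDI instrument 49.
let soundFontURL = Bundle.main.url(forResource: "GeneralMIDI", withExtension: "sf2")!
try! sampler.loadSoundBankInstrument(at: soundFontURL,
                                     program: 48,
                                     bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                     bankLSB: UInt8(kAUSampler_DefaultBankLSB))

// B2-E5 is MIDI 47-76; hold middle C (60) until told otherwise.
sampler.startNote(60, withVelocity: 100, onChannel: 0)
// ... later, whenever the note should end:
sampler.stopNote(60, onChannel: 0)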

Switch call to speaker - iOS private API

I'm trying to figure out how I can switch a current call to output audio via the loudspeaker (as if the person had pressed "speaker" in the Phone app). Ideally, I want to accept the call on loudspeaker; but one step at a time!
I have searched through various headers, both private and non-private frameworks but I can't find the appropriate method to call in order to switch to loudspeaker.
Originally, I expected it would be present in CTCall.h, but nothing useful (in this respect) is in there.
Does anyone know where the appropriate method is found?
Many thanks :)
Several ideas:
1) Look at system-wide event generation.
You can programmatically tap the required button ("speaker" or "Answer").
Here are a couple of questions regarding this:
Simulating System Wide Touch Events on iOS
Simulating System Wide Touch Events in iOS without jailbreaking the device
How to send a touch event to iPhone OS?
You may be interested in Googling GSEvent, which is the key to event simulation.
2) Go to the simulator SDK folder (/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator6.1.sdk)
and do something like this:
grep -R "Speaker" ./
The idea is to search for "Speaker" in the binaries (as opposed to the header files). Most private APIs aren't documented in header files (that's part of the reason they are private).
I believe a couple of interesting hits are:
- the TelephoneUI private framework,
- the AudioToolbox framework (BTW, its .h files even contain kAudioSessionOverrideAudioRoute_Speaker; see the sketch at the end of this answer),
- the AVFoundation framework,
- the IOKit framework (it has kIOAudioOutputPortSubTypeExternalSpeaker in its .h files),
and so on.
The next step would be to disassemble them. Most of these frameworks have A LOT of interesting APIs which aren't defined in .h files. It's useful to browse through them to check whether you find anything interesting on this subject.
If you don't want to bother with disassembling, you can get extracted headers from here:
https://github.com/nst/iOS-Runtime-Headers/tree/master/PrivateFrameworks
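For completeness: the modern public-API counterpart to that AudioToolbox constant routes your own app's audio session to the built-in speaker. It does not touch the system Phone call this question is about, but it is the supported way to get the "speaker" behaviour for audio your own app plays:

import AVFoundation

do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
    // Force output to the built-in speaker instead of the receiver.
    try session.overrideOutputAudioPort(.speaker)
    try session.setActive(true)
} catch {
    print("Audio session error: \(error)")
}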