iOS: data over the headphone port

Hardware accessories like the Square Card Reader use the audio (headphone) port to transfer data.
Which API in the iOS SDK accomplishes this?

If you look at official Apple headphones, there are three rings around the connector tip. Most headphones have only two (one for the left channel, the other for the right). The third ring adds a conductor for the microphone line, which is also where the waveforms sent by the volume controls on the headphones travel.
If you build your own hardware, you can send waveforms over that extra conductor, read them with Core Audio, and have your app respond to any controls on the hardware.
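There is no dedicated "data over headphone jack" API; readers like Square's modulate data as audio and encode/decode it themselves. A minimal sketch of the transmit side, assuming an FSK-style scheme (the frequencies, baud rate, and framing here are illustrative, not Square's actual protocol; AVAudioSourceNode requires iOS 13+):

```swift
import AVFoundation

// Sketch: encode bits as two audible frequencies (FSK) and play them out of
// the headphone jack. All tuning constants below are assumptions.
final class AudioJackTransmitter {
    private let engine = AVAudioEngine()
    private let sampleRate = 44_100.0
    private let markFreq = 1_200.0    // tone for a 1 bit (assumed)
    private let spaceFreq = 2_200.0   // tone for a 0 bit (assumed)
    private let samplesPerBit = 441   // 100 baud at 44.1 kHz
    private var bits: [Bool] = []
    private var bitIndex = 0
    private var sampleInBit = 0
    private var phase = 0.0

    func send(_ newBits: [Bool]) throws {
        bits = newBits
        bitIndex = 0
        sampleInBit = 0

        let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate,
                                   channels: 1)!
        let source = AVAudioSourceNode { [self] _, _, frameCount, audioBufferList in
            let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
            let out = buffers[0].mData!.assumingMemoryBound(to: Float.self)
            for frame in 0..<Int(frameCount) {
                guard bitIndex < bits.count else { out[frame] = 0; continue }
                // Advance a sine oscillator at the current bit's frequency.
                let freq = bits[bitIndex] ? markFreq : spaceFreq
                phase += 2 * .pi * freq / sampleRate
                out[frame] = Float(sin(phase)) * 0.8
                sampleInBit += 1
                if sampleInBit == samplesPerBit { sampleInBit = 0; bitIndex += 1 }
            }
            return noErr
        }
        engine.attach(source)
        engine.connect(source, to: engine.mainMixerNode, format: format)
        try engine.start()
    }
}
```

Receiving works the other way around: the accessory drives the microphone conductor, so you tap the input with Core Audio (or an AVAudioEngine input tap) and demodulate the tones yourself.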

Related

Is it technically possible to intercept iOS audio to add DSP sound filtering?

I want to create an iPhone app that adds my own sound processing to the currently playing audio (equalization, reverb, dynamic compression, spatial processing). I have solid audio-processing knowledge, but I can't find a way to intercept audio on an iOS device to apply my filtering.
I am looking for a solution that works with any source (any app) and any output (the iPhone's built-in speaker, wired headphones, a Bluetooth headset).
What I have explored so far:
1) Audio Units / Inter-App Audio. Unfortunately AU/IAA is only supported by a limited set of music apps, and I have not found a way to capture general audio.
2) A virtual AirPlay device. Unfortunately, selecting another output device routes the sound away from the iPhone, so the user can no longer hear it.
3) The ReplayKit API. Unfortunately, it seems incompatible with any AVPlayer media content.
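For what it's worth, none of these yields system-wide interception; with public API, iOS only lets you process audio your own app plays. A minimal sketch of in-app processing with AVAudioEngine (the file path and effect settings are placeholders):

```swift
import AVFoundation

// Process audio that this app itself plays: player -> EQ -> reverb -> output.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let eq = AVAudioUnitEQ(numberOfBands: 1)
let reverb = AVAudioUnitReverb()

// One parametric EQ band: +6 dB around 1 kHz (illustrative values).
eq.bands[0].filterType = .parametric
eq.bands[0].frequency = 1_000
eq.bands[0].bandwidth = 1.0
eq.bands[0].gain = 6.0
eq.bands[0].bypass = false

reverb.loadFactoryPreset(.mediumHall)
reverb.wetDryMix = 30

engine.attach(player)
engine.attach(eq)
engine.attach(reverb)

// Placeholder path; use a real local file.
let file = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/track.m4a"))
engine.connect(player, to: eq, format: file.processingFormat)
engine.connect(eq, to: reverb, format: file.processingFormat)
engine.connect(reverb, to: engine.mainMixerNode, format: file.processingFormat)

player.scheduleFile(file, at: nil)
try engine.start()
player.play()
```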

iOS 7+: Is it possible to capture video from the front camera while showing another video on the screen?

I have a task.
There is an iOS device and an app I should create.
The app plays a video file (a local file on the device) while the front camera captures the user's face.
Playing the video and capturing the user's face with the front camera happen simultaneously.
I see that FaceTime and Skype for iOS can do this. But the former is made by Apple (they can do whatever they like on their own devices), while the latter is owned by Microsoft (big companies with big money are sometimes allowed more than ordinary developers).
Moreover, I have doubts about whether video capture can coexist with a video player at the same time.
So I am not sure this task is 100% implementable and publishable.
Is it possible on iOS 7+?
Does Apple allow it? (There are many technical possibilities on iOS, but only some of them are acceptable to Apple, especially during review.)
Are there good technical references?
I believe so. A search on the App Store shows a number of video conferencing apps:
Zoom cloud
Polycom
VidyoMobile
Fuze
Just search for "video conferencing".
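On the technical side, AVFoundation does allow an AVCaptureSession and an AVPlayer to run at the same time, with no special entitlement involved. A minimal sketch (using current Swift APIs rather than iOS 7-era ones; the video file name and preview frame are placeholders, and NSCameraUsageDescription is required in Info.plist):

```swift
import AVFoundation
import UIKit

// Plays a bundled video full screen while a small front-camera preview
// runs on top of it.
final class PlaybackAndCaptureViewController: UIViewController {
    private let session = AVCaptureSession()
    private var player: AVPlayer?

    override func viewDidLoad() {
        super.viewDidLoad()

        // Full-screen playback of a local video (placeholder file name).
        let url = Bundle.main.url(forResource: "demo", withExtension: "mp4")!
        let player = AVPlayer(url: url)
        let playerLayer = AVPlayerLayer(player: player)
        playerLayer.frame = view.bounds
        view.layer.addSublayer(playerLayer)
        self.player = player

        // Small front-camera preview layered above the video.
        if let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                for: .video, position: .front),
           let input = try? AVCaptureDeviceInput(device: camera),
           session.canAddInput(input) {
            session.addInput(input)
            let preview = AVCaptureVideoPreviewLayer(session: session)
            preview.frame = CGRect(x: 16, y: 40, width: 120, height: 160)
            view.layer.addSublayer(preview)
            session.startRunning()
        }

        player.play()
    }
}
```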

Is it possible to record sound via the USB Camera Kit in Objective-C?

I would like to build a straightforward app that can capture USB sound input via the iPod/iPad USB camera kit. Is this possible, and if so, what area of Core Audio should I look at?
Thanks for any help!
iOS automatically reroutes microphone input from compatible generic USB audio input devices connected through the Camera Connection Kit to all iOS audio APIs.
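In practice that means no device-specific code is needed: once the USB device is the active input route, an ordinary AVAudioRecorder captures it. A sketch (the recorder settings and file name are illustrative):

```swift
import AVFoundation

// Records from whatever input route is active, which includes a
// class-compliant USB audio device attached via the Camera Connection Kit.
func startUSBRecording() throws -> AVAudioRecorder {
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.record, mode: .default)
    try audioSession.setActive(true)

    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent("usb-take.m4a")
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1,
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.record()
    return recorder
}
```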

Play sound when there is silence in the room; stop sound when voices are heard

I need some guidance as I may have to shelve development until a later time.
I want to play a sound once the lights are switched off and the room goes dark, then stop the sound once the light is switched back on. I've discovered that Apple doesn't currently provide a way to access the ambient light sensor (not in any way that will get App Store approval, anyway).
The alternative I've been working on is to detect sound levels, using AVAudioPlayer/AVAudioRecorder and the example code from http://mobileorchard.com/tutorial-detecting-when-a-user-blows-into-the-mic/. I.e., when the voices of the people in the room have dropped to a specific level (i.e., silence, compensating for background noise), I play my sounds.
However, if the people in the room start talking again and I detect their voices, I need to stop playing the sounds.
Q: Is this self-defeating? I.e., will the sound generated by the iPhone essentially be picked up by the iPhone's microphone and be indistinguishable from any voices in the room? Methinks yes, and unless there's an alternative approach to this, I'm at an impasse until the light-sensor API is opened up by Apple.
I don't think the noise made by the iPhone speaker will be picked up by the mic. The phone cancels sounds generated by the speaker. I read this once, and if I find the source I'll post it. Empirically, though, you can tell this is the case when you use speakerphone: if the mic picked up sound from a speaker an inch away from it, the feedback would be terrible.
Having said that, the only sure way to see if it will work for your situation is to try it out.
I agree with woz: the phone should cancel the sound it's emitting. As for the ambient light sensor, the only alternative I see is using the camera, but that would be very energy-inefficient and would require the app to be running.
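For reference, the level-metering approach from the linked tutorial looks roughly like this in modern Swift: record to /dev/null with metering enabled and poll the average power. The polling interval and the -35 dBFS silence threshold below are illustrative assumptions:

```swift
import AVFoundation

// Monitors the room's sound level and fires callbacks on silence/voices.
final class RoomLevelMonitor {
    private var recorder: AVAudioRecorder?
    private var timer: Timer?
    var onSilence: (() -> Void)?
    var onVoices: (() -> Void)?

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)

        // Record to /dev/null: we only want the meter, not the audio data.
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatAppleLossless,
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1,
        ]
        let recorder = try AVAudioRecorder(url: URL(fileURLWithPath: "/dev/null"),
                                           settings: settings)
        recorder.isMeteringEnabled = true
        recorder.record()
        self.recorder = recorder

        timer = Timer.scheduledTimer(withTimeInterval: 0.3, repeats: true) { _ in
            recorder.updateMeters()
            let level = recorder.averagePower(forChannel: 0) // dBFS, 0 is loudest
            if level < -35 { self.onSilence?() } else { self.onVoices?() }
        }
    }
}
```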

Recording the screen of an iPad 2

First of all: this question is not directly programming-related. However, the problem only exists for developers, so I'm trying to find an answer here anyway, since other people in this community may already have solved it.
I want to record the screen of the iPad 2 to be able to create demo videos of an app.
Since I'm using motion data, I cannot use the simulator to create the video and have to use the actual iPad itself.
I've seen various websites where different methods were discussed.
iPad 2 <==> Apple Digital AV Adapter <==> Blackmagic Design Intensity Pro <==> Playback software <==> TechSmith Camtasia screen recorder on the playback software to circumvent the HDCP flag
iPad 2 <==> Apple VGA Adapter <==> VGA2USB <==> Recording software
...
Everyone seems to have his own hacky solution to this problem.
My setup is the following:
iPad 2 (without Jailbreak)
Apple Mac mini with Lion Server
PC with non-HDCP compliant main board
Non-HDCP compliant displays
It doesn't matter whether the recording has to be on the mac or on the PC.
My questions:
Is it possible to disable the HDCP flag programmatically from within the application itself?
HDMI offers better quality than VGA. Will the first method I've listed work with my setup even though I don't have a full HDCP chain?
What about the Intensity Extreme box? Can I use it, connect it to the Thunderbolt port of the Mac mini, and record from there?
Is the Thunderbolt port of the Mac mini bidirectional, and is it suited for capturing? Is the Mac mini HDCP compliant? If it does not work because my screens are not HDCP compliant, will it work if I start the recording software and then disconnect the screens? Will it work if I use an iPad 2 over VNC as a screen, since it has to be HDCP compliant if it sends HDCP streams?
If I have to fall back to the VGA solution: will the VGA adapter mirror everything shown on the iPad 2 screen, or do I have to program a special presentation mode that sends everything to the VGA cable instead of the iPad screen (see the sketch after this list)? Is the proposed VGA2USB setup of high enough quality, or would you recommend other tools?
What about the Apple Composite AV Cable? Maybe another approach?
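On the "special presentation mode" point: the iPad 2 mirrors to Apple's adapters by default, but external displays also surface through UIScreen, so an app can drive the second screen explicitly instead of mirroring. A sketch with current Swift names (the content view controller is a placeholder):

```swift
import UIKit

// Attaches a window to an external display (VGA or HDMI adapter) when one
// connects, and tears it down when it disconnects.
final class ExternalDisplayManager {
    private var externalWindow: UIWindow?

    func start() {
        let center = NotificationCenter.default
        center.addObserver(forName: UIScreen.didConnectNotification,
                           object: nil, queue: .main) { [weak self] note in
            guard let screen = note.object as? UIScreen else { return }
            let window = UIWindow(frame: screen.bounds)
            window.screen = screen
            window.rootViewController = UIViewController() // your demo content here
            window.isHidden = false
            self?.externalWindow = window
        }
        center.addObserver(forName: UIScreen.didDisconnectNotification,
                           object: nil, queue: .main) { [weak self] _ in
            self?.externalWindow = nil
        }
    }
}
```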
I decided to use the Blackmagic Design Intensity Pro together with the Apple Digital AV adapter on a machine with a working HDCP chain.
This approach worked well; capture is possible in 720p, with the iPad's screen centered within the captured video. Capturing therefore does not happen at the full native resolution of the iPad's screen, and black borders are introduced to fill the 720p video frames.
I posted info about displaying HDMI HDTV output on a non-HDCP monitor; look for my posts, perhaps it will be of use. Or, how about just using a cell phone to record a video of your tablet's screen, with proper lighting, low reflectance, etc.? It won't be 100%, but it might be sufficient.
