Automated call with Avaya that plays an audio file

We have an Avaya server, which works like a charm with Avaya softphones. Since softphones are possible, is there any way to create/develop an Avaya softphone alternative that can place an automated call to a specific extension retrieved from a DB query and play a pre-generated text-to-speech audio file (*.wav / *.mp3) during the call, passing the file as some sort of parameter?
I can manage to develop the solution in Java or C# if needed, but I can't find the way, or a class reference, to accomplish this. Any guidelines to achieve my goal would be great.

You can do it using CTI/telephony integration, in a few different ways:
- register a DMCC endpoint on the same extension you would like to control and play the audio from there
- register DMCC extensions that are conferenced into the call (three-step conference, service observing, single-step conference) and play the audio from there
- use IVR channels to handle the audio and simply conference them in
The keywords to look up are:
- AES - Avaya Application Enablement Services - for CTI integration
- DMCC - Device, Media and Call Control API
- TSAPI/JTAPI - for call/device control
Of course, you need the appropriate licenses and free capacity to use these features.
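As a starting point for the TSAPI/JTAPI route, here is a minimal, hedged sketch of placing an outbound call from a monitored station with plain JTAPI. The provider string, station number and dialed extension are placeholders you would fill from your AES configuration and DB query; playing the TTS file itself would then be handled by the DMCC or IVR leg described above.
import javax.telephony.Address;
import javax.telephony.Call;
import javax.telephony.JtapiPeer;
import javax.telephony.JtapiPeerFactory;
import javax.telephony.Provider;
import javax.telephony.Terminal;

public class AutoDialer {
    public static void main(String[] args) throws Exception {
        // Vendor-specific provider string: AES host, CTI user and password
        // (placeholder values - check the Avaya JTAPI SDK docs for the exact format).
        String providerString = "AVAYA#CM#CSTA#aes-host;loginID=ctiuser;passwd=secret";

        JtapiPeer peer = JtapiPeerFactory.getJtapiPeer(null);
        Provider provider = peer.getProvider(providerString);

        // The station the automated call originates from, and the target
        // extension fetched from the DB query (both hypothetical numbers).
        Address caller = provider.getAddress("4001");
        Terminal callerTerminal = caller.getTerminals()[0];
        String targetExtension = "4711";

        // Place the call; audio playback is then done on the DMCC/IVR leg.
        Call call = provider.createCall();
        call.connect(callerTerminal, caller, targetExtension);
    }
}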

Related

Getting PCM audio for visualization via the Spotify iOS SDK

We're currently looking at bringing our music visualization software, which has been around for many years, to an iOS app that plays music via the new iOS Spotify SDK -- check out http://soundspectrum.com to see our visuals such as G-Force and Aeon.
Anyway, we have the demo projects in the Spotify iOS SDK up and running and things look good, but the major step forward is to get access to the PCM audio so we can send it into our visual engines, etc.
Could a Spotify dev or someone in the know kindly suggest what possibilities are available to get hold of the PCM audio? The PCM audio block can be as simple as a circular buffer of a few thousand of the latest samples (which we would use for FFTs, etc.).
Thanks in advance!
Subclass SPTCoreAudioController and do one of two things:
1. Override connectOutputBus:ofNode:toInputBus:ofNode:inGraph:error: and use AudioUnitAddRenderNotify() to add a render callback to destinationNode's audio unit. The callback will be called as the output node is rendered and will give you access to the audio as it's leaving for the speakers. Once you've done that, make sure you call super's implementation for the Spotify iOS SDK's audio pipeline to work correctly.
2. Override attemptToDeliverAudioFrames:ofCount:streamDescription:. This gives you access to the PCM data as it's produced by the library. However, there's some buffering going on in the default pipeline, so the data given in this callback might be up to half a second behind what's going out to the speakers, so I'd recommend suggestion 1 over this. Call super here to continue with the default pipeline.
Once you have your custom audio controller, initialise an SPTAudioStreamingController with it and you should be good to go.
I actually used suggestion 1 to implement iTunes' visualiser API in my Mac OS X Spotify client that was built with CocoaLibSpotify. It's not working 100% smoothly (I think I'm doing something wrong with runloops and stuff), but it drives G-Force and Whitecap pretty well. You can find the project here, and the visualiser stuff is in VivaCoreAudioController.m. The audio controller class in CocoaLibSpotify and that project is essentially the same as the one in the new iOS SDK.

WebRTC media over iOS

I want to use WebRTC's media layer with proprietary signaling on iOS. Is it possible to use only the WebRTC media layer from the ObjC library that has been released for iOS (libjingle_peerconnection_objc.a)?
Yes.
The PeerConnection object provides the full WebRTC API, which by default does not include hardware capture, media rendering, or signaling. If you want a complete solution, you will need those three pieces.
The AppRTCDemo code (webrtc.org) provides an implementation of audio and video capturers and renderers, leveraging native iOS frameworks, that you can reuse out of the box.
You could then just replace the signaling (GAE Channel) with your own. Use the signaling for the original handshake (offer/answer) and the media/data path setup (ICE candidate exchange), and the WebRTC part will be taken care of.
If you want to replace only the media part in your proprietary solution, you can use the VoiceEngine from WebRTC.
It's part of WebRTC's core, and the PeerConnection API is built on top of it. Be aware that what you get is an RTP sender/receiver plus voice processing; the security layer, NAT traversal, etc. have to be implemented by yourself.
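Whatever transport the proprietary signaling uses, it only has to carry the offer/answer SDP and the ICE candidates between the two peers; WebRTC handles everything else once those are delivered. A rough sketch of that contract, written in Java purely for illustration (every type and method name here is hypothetical, not part of any WebRTC library):
// Hypothetical contract for a proprietary signaling channel.
public interface SignalingChannel {

    /** Send the local SDP offer or answer produced by the peer connection. */
    void sendSessionDescription(String type, String sdp); // type is "offer" or "answer"

    /** Send a locally gathered ICE candidate to the remote peer. */
    void sendIceCandidate(String sdpMid, int sdpMLineIndex, String candidate);

    /** Register a listener for messages arriving from the remote peer. */
    void setListener(Listener listener);

    interface Listener {
        void onRemoteSessionDescription(String type, String sdp);
        void onRemoteIceCandidate(String sdpMid, int sdpMLineIndex, String candidate);
    }
}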

Is it possible to cast the input from the phone microphone to the receiver?

I would like to know if it is possible to cast the audio taken directly from the microphone of an iOS device to the receiver, in a live way.
I've downloaded all the git example projects, and all of them use a "loadMedia" method to start the casting. Example of one of those:
- (NSInteger)loadMedia:(GCKMediaInformation *)mediaInfo
              autoplay:(BOOL)autoplay
          playPosition:(NSTimeInterval)playPosition;
Can I follow this approach to do what I want? If so, what's the expected delay?
Thanks a lot
Echo is likely if the device (iOS, Android, or Chrome) is in range of the speakers. That said, pick a fast codec that is supported, such as CELT/Opus or Vorbis, and then either:
1. Implement your own protocol using CastChannel that passes the binary data. You'll want to do some simple conversion of the stream from binary to something a bit more friendly. Take a look at Intro to Web Audio for using AudioContext.
2. Set up a trivial server on your device to stream from, then tell the receiver to just access that local server.
I haven't tried either of these, but they should be possible.

Listening in on active Asterisk calls from a custom iOS app

I am working on a mobile app to listen in on ongoing Asterisk calls. Asterisk is set up to record calls, but the inbound and outbound audio gets saved to separate WAV files. The first obstacle was streaming the WAV files while they are still being written to, which I achieved with Node.js; now I need to mix the two files together and stream the result, which would be doable if the files were not being written to at the same time.
The first option would be to figure out how to programmatically mix the two, while continuously checking whether more data has been appended, and stream the result as I go (this feels above my pay grade).
The second option would be to stream the two files independently to the client iOS application, which would play them at the same time. Even if the challenge of playing two streams simultaneously were solved, it would require a very stable connection, so I don't see this as a viable option.
The third possibility would be to embed a softphone into the iOS app and use it as a client for ChanSpy. Would that be possible, and what library could help me achieve it?
What do you suggest? Perhaps there are more options out there?
Thanks
What about using MixMonitor (Application_MixMonitor) instead?
Why not just build a SIP client on iOS and use ChanSpy to listen to the calls live?
http://www.voip-info.org/wiki/view/Asterisk+cmd+ChanSpy
You can supply the m option to the Monitor (mixmon) application, or use sox to do the mixing.
https://wiki.asterisk.org/wiki/display/AST/Application_Monitor
http://leifmadsen.wordpress.com/tag/mixmonitor-sox-mixing-asterisk-script/
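For the sox route, mixing the two recorded legs into a single file is one command (sox -m inleg.wav outleg.wav mixed.wav); a minimal sketch of invoking it programmatically, shown here in Java for illustration (the file names are hypothetical, sox is assumed to be on the PATH, and the same command can just as well be spawned from the existing Node.js server):
import java.io.IOException;

public class MixLegs {
    // Mixes the two recorded call legs into a single WAV file using sox -m.
    public static void mix(String inLeg, String outLeg, String mixed)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder("sox", "-m", inLeg, outLeg, mixed)
                .inheritIO()
                .start();
        if (p.waitFor() != 0) {
            throw new IOException("sox exited with a non-zero status");
        }
    }

    public static void main(String[] args) throws Exception {
        mix("call-in.wav", "call-out.wav", "call-mixed.wav");
    }
}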

How to fast-forward and rewind audio in a J2ME / BlackBerry (MIDP) application?

I want to fast-forward and rewind recorded audio in a J2ME and BlackBerry application.
Is there any sample code available? How do I do it?
As a starting point, read the specification of JSR-135: http://www.jcp.org/en/jsr/detail?id=135
Once you have started a Player object, fast-forward and rewind are done by calling the following three methods:
Player.stop();
Player.setMediaTime();
Player.start();
To calculate the value of the parameter to pass to setMediaTime(), you will probably need to call
Player.getMediaTime();
Once you have all that, check the BlackBerry documentation to see whether there are any differences between the standard J2ME API and the BlackBerry APIs in that area.
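Putting those calls together, here is a minimal sketch of a seek helper, assuming a started JSR-135 Player (the class and method names are arbitrary; note that media time is expressed in microseconds):
import javax.microedition.media.MediaException;
import javax.microedition.media.Player;

public class SeekHelper {
    // Skips forward (or backward, with a negative value) by the given number
    // of seconds. JSR-135 media time is expressed in microseconds.
    public static void seekBy(Player player, long seconds) throws MediaException {
        player.stop();
        long target = player.getMediaTime() + seconds * 1000000L;
        if (target < 0) {
            target = 0;
        }
        player.setMediaTime(target);
        player.start();
    }
}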
