I am automating the Snapchat application, where I have to deal with videos that play with sound. I have to mute and unmute the media volume to view a video with and without sound. Right now I am using the piece of code below to mute and unmute, but it is handling the system ringtone volume, not the media volume.
driver.pressKey(new KeyEvent(AndroidKey.VOLUME_MUTE));
driver.pressKey(new KeyEvent(AndroidKey.VOLUME_UP));
How to control Media Volume in Android using Appium?
Key presses in Appium are done with
driver.press_keycode(code)
All required key codes can be found in the Android KeyEvent documentation:
24 - KEYCODE_VOLUME_UP
25 - KEYCODE_VOLUME_DOWN
So in your case, the final code would be:
driver.press_keycode(24)  # volume up
driver.press_keycode(25)  # volume down
While the video is playing, the hardware volume keys act on the active (media) stream, so these presses adjust the media volume rather than the ringtone.
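Since the question uses the Appium Java client, the same key events look like this in Java. This is a sketch, assuming the io.appium:java-client `AndroidDriver`; the number of presses needed to fully mute (here `VOLUME_STEPS`) is an assumption and depends on the device's media-volume granularity.

```java
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.nativekey.AndroidKey;
import io.appium.java_client.android.nativekey.KeyEvent;

public class MediaVolumeHelper {
    // Assumed: most devices have about 15 media-volume steps.
    private static final int VOLUME_STEPS = 15;

    // While the video is playing, VOLUME_DOWN targets the media stream,
    // so pressing it repeatedly effectively mutes the media volume.
    public static void muteMedia(AndroidDriver driver) {
        for (int i = 0; i < VOLUME_STEPS; i++) {
            driver.pressKey(new KeyEvent(AndroidKey.VOLUME_DOWN));
        }
    }

    public static void unmuteMedia(AndroidDriver driver) {
        for (int i = 0; i < VOLUME_STEPS / 2; i++) {
            driver.pressKey(new KeyEvent(AndroidKey.VOLUME_UP));
        }
    }
}
```

Call `muteMedia(driver)` while the video is in the foreground, since the volume keys only affect the media stream when media is actively playing.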
Related
I am currently working on a VoIP app using PJSUA library.
Our app is a PTT (push-to-talk) app and supports full duplex.
As such, we need to switch the sound device constantly between speaker-only and full duplex. For that we use the pjsua_set_snd_dev2 method with the default capture and playback devices, alternating the mode between speaker only and speaker+mic.
Whenever I enable echo cancellation I experience a significant decrease in speaker volume whenever using the PJSUA_SND_DEV_SPEAKER_ONLY mode.
This decrease persists even when I disable echo cancellation using pjsua_set_ec.
I configure the AudioSession with playAndRecord category, default mode and with defaultToSpeaker and duckOthers category option.
Any ideas anyone?
While making a broadcast with the iOS SDK, everything works fine and viewers can hear what the broadcaster is saying.
But when I start an mp3 song from the broadcaster's phone library, viewers should be able to hear both the broadcaster and the song that is playing; instead, no audio is sent to the viewer once I start the song.
Ideally, the broadcaster's audio should mix with the song, and the viewer should be able to hear both.
I'm using the code below to play the song.
Please let me know what I should do, as I want to play an audio file in the background while streaming.
Yes, it's expected behavior: when playing the mp3 file, you're setting the shared audio session's category to playback, which closes the microphone.
try AVAudioSession.sharedInstance().setCategory(.playback)
After that you need to put the session back into a recording-capable configuration so it can capture audio from the microphone again: the category should be set to .playAndRecord, and the mode can be set to .voiceChat.
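A minimal sketch of that configuration, using the standard AVAudioSession API (the option set shown is an assumption; whether the mp3 actually gets mixed into the outgoing stream still depends on how the broadcasting SDK captures audio):

```swift
import AVFoundation

// Keep the microphone open while playing the mp3: playAndRecord allows
// simultaneous capture and playback, voiceChat enables voice processing,
// mixWithOthers lets the player's audio coexist with the capture session,
// and defaultToSpeaker routes playback to the loudspeaker.
func configureBroadcastSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.mixWithOthers, .defaultToSpeaker])
    try session.setActive(true)
}
```

Set this once before starting playback instead of switching the category to .playback, so the microphone is never closed.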
Is there a way create a virtual audio output device that would make it show up in the Music app's or Spotify's output options? Alternatively, is there a way to intercept the audio stream and then force audio output to something unused (say, open headphone port)?
What I want to do is take the raw audio stream, encode/compress it via a codec, and then send it over BLE (not BT Classic). Ideally my "device", or service, would show up in the output options of the Music/Spotify apps.
I work on a VoIP app. The AudioSession's mode is set to kAudioSessionMode_VoiceChat.
For a call, I open a CoreAudio AudioUnit with subtype kAudioUnitSubType_VoiceProcessingIO . Everything works fine. After the first call, I close the AudioUnit with AudioUnitUninitialize() and I deactivate the audio session.
Now, however, it seems as if the audio device is not correctly released: the ringer volume is very low, the media player's volume is lower than usual. And for a subsequent call, I cannot activate kAudioUnitSubType_VoiceProcessingIO anymore. It works to create an AudioUnit with kAudioUnitSubType_RemoteIO instead, but also the call's volume is very low (both receiver and speaker).
This first occurred on iOS 5. With the iPhone 5 on iOS 6, it is even worse (the volume is even lower).
Has anyone seen this? Do I need to do more than AudioUnitUninitialize() to release the Voice Processing unit?
I've found the solution: I've incorrectly used AudioUnitUninitialize() to free the audio component retrieved with AudioComponentInstanceNew(). Correct is to use AudioComponentInstanceDispose().
Yes, you need to dispose of the audio unit when using kAudioUnitSubType_VoiceProcessingIO. For some reason there is no problem with the RemoteIO subtype. So whenever you get OSStatus -66635 (kAudioQueueErr_MultipleVoiceProcessors), check for missing AudioComponentInstanceDispose() calls.
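A teardown sketch matching that finding, using the AudioToolbox calls named above (the stop → uninitialize → dispose ordering is the commonly recommended sequence, not something stated in the original answer):

```swift
import AudioToolbox

// Fully release a voice-processing I/O unit. AudioUnitUninitialize() alone
// leaves the component instance alive, which keeps the voice processor
// claimed and triggers OSStatus -66635 on the next call.
func tearDown(audioUnit: AudioUnit) {
    AudioOutputUnitStop(audioUnit)            // stop I/O first
    AudioUnitUninitialize(audioUnit)          // undo AudioUnitInitialize()
    AudioComponentInstanceDispose(audioUnit)  // free AudioComponentInstanceNew()
}
```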
My project involves a magnetic card reader device that plugs into the phono socket (i.e. it only uses the microphone input).
Can I get my project to output sound through the inbuilt speaker while simultaneously listening for input from the device?
Research suggests this is not possible:
iPhone audio playback: force through internal speaker?
Force iPhone to output through the speaker, while recording from headphone mic
Audio Session Services: kAudioSessionProperty_OverrideAudioRoute with different routes for input & output
The only way around this that I can see is changing the audio session every time I wish to emit a sound.
Is this really the only option? And is it practical? How long does it take for the audio session to reconfigure itself?
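For reference, the per-sound reconfiguration would look roughly like this with the modern AVAudioSession equivalent of kAudioSessionProperty_OverrideAudioRoute (a sketch only; whether iOS honors the speaker route while a wired accessory is attached is exactly the limitation the linked questions describe):

```swift
import AVFoundation

// Keep recording from the accessory while forcing playback to the
// built-in speaker: stay in playAndRecord and override only the output.
func routeOutputToSpeaker() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
    try session.overrideOutputAudioPort(.speaker)
}
```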