I had to check whether the device is connected to an AirPlay device, and whether that connection is mirroring or streaming, but the check needed to happen before the video started.
airPlayVideoActive only returns YES once the video is already playing.
This is my solution:
- (BOOL)isAudioSessionUsingAirplayOutputRoute
{
    /**
     * I found no other way to check if there is a connection to an airplay device
     * airPlayVideoActive is NO as long as the video hasn't started
     * and this method is true as soon as the device is connected to an airplay device
     */
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    AVAudioSessionRouteDescription *currentRoute = audioSession.currentRoute;
    for (AVAudioSessionPortDescription *outputPort in currentRoute.outputs) {
        if ([outputPort.portType isEqualToString:AVAudioSessionPortAirPlay])
            return YES;
    }
    return NO;
}
To check whether the AirPlay connection is mirroring, you just have to check the screen count.
if ([[UIScreen screens] count] < 2) {
    // streaming
}
else {
    // mirroring
}
If there is a better solution, let me know
Swift version:
var isAudioSessionUsingAirplayOutputRoute: Bool {
    let audioSession = AVAudioSession.sharedInstance()
    let currentRoute = audioSession.currentRoute
    for outputPort in currentRoute.outputs {
        if outputPort.portType == AVAudioSessionPortAirPlay {
            return true
        }
    }
    return false
}
And checking the screen count:
if UIScreen.screens.count < 2 {
    // streaming
} else {
    // mirroring
}
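Since the screen count only changes when mirroring starts or stops, one way to keep this check current is to observe UIScreen's connect/disconnect notifications. A minimal sketch (the AirPlayObserver type and updateAirPlayState-style callback are my own names, not from the answer above):

import UIKit

// Re-evaluate mirroring vs. streaming whenever an external screen appears or disappears.
final class AirPlayObserver {
    private var tokens: [NSObjectProtocol] = []

    func startObserving(onChange: @escaping () -> Void) {
        let center = NotificationCenter.default
        tokens.append(center.addObserver(forName: UIScreen.didConnectNotification,
                                         object: nil, queue: .main) { _ in onChange() })
        tokens.append(center.addObserver(forName: UIScreen.didDisconnectNotification,
                                         object: nil, queue: .main) { _ in onChange() })
    }

    deinit {
        tokens.forEach { NotificationCenter.default.removeObserver($0) }
    }
}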
If you are using AVPlayer, it has an isExternalPlaybackActive property which can help you.
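That property is key-value observable, so you can react when external playback starts or stops. A rough sketch, assuming you already have an AVPlayer instance (ExternalPlaybackWatcher is just a placeholder name):

import AVFoundation

// Watch AVPlayer.isExternalPlaybackActive via key-value observation.
final class ExternalPlaybackWatcher {
    private var observation: NSKeyValueObservation?

    func watch(_ player: AVPlayer) {
        observation = player.observe(\.isExternalPlaybackActive, options: [.initial, .new]) { player, _ in
            print("External playback (AirPlay) active: \(player.isExternalPlaybackActive)")
        }
    }
}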
For the poor souls still on Objective-C:
[[NSNotificationCenter defaultCenter]
    addObserver:self
       selector:@selector(deviceChanged:)
           name:AVAudioSessionRouteChangeNotification
         object:[AVAudioSession sharedInstance]];

- (void)deviceChanged:(NSNotification *)sender {
    NSLog(@"Enters here when connecting to or disconnecting from AirPlay");
}
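And a rough Swift counterpart, in case it helps (notification name and selector spelled for a recent SDK; RouteObserver is my own wrapper name):

import AVFoundation

final class RouteObserver: NSObject {
    func start() {
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(deviceChanged(_:)),
            name: AVAudioSession.routeChangeNotification,
            object: AVAudioSession.sharedInstance())
    }

    @objc private func deviceChanged(_ notification: Notification) {
        print("Enters here when connecting to or disconnecting from AirPlay")
    }
}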
I'm calling a function to attempt to turn on my device's flash:
private func flashOn(device: AVCaptureDevice) {
    print("flashOn called")
    do {
        try device.lockForConfiguration()
        // line below returns warning: 'flashMode' was deprecated in iOS 10.0: Use AVCapturePhotoSettings.flashMode instead.
        device.flashMode = AVCaptureDevice.FlashMode.auto
        device.unlockForConfiguration()
    } catch {
        // handle error
        print("flash on error")
    }
}
Setting device.flashMode to AVCaptureDevice.FlashMode.auto brings up the warning "'flashMode' was deprecated in iOS 10.0: Use AVCapturePhotoSettings.flashMode instead." Even though it is just a warning, the flash is not enabled when I test my app.
So I set the line to this, like the warning suggests:
AVCapturePhotoSettings.flashMode = AVCaptureDevice.FlashMode.auto
And I get the error "Instance member 'flashMode' cannot be used on type 'AVCapturePhotoSettings'"
So I have no idea how to set the flash in Xcode 9 using Swift 4.0. All the answers I've found on Stack Overflow are for previous versions.
I've been facing the same problem. Unfortunately, many useful methods got deprecated in iOS 10 and 11. Here is how I managed to resolve it:
An AVCapturePhotoSettings object is unique and cannot be reused, so you need to create new settings every time using this method:
/// the current flash mode
private var flashMode: AVCaptureDevice.FlashMode = .auto

/// Get settings
///
/// - Parameters:
///   - camera: the camera
///   - flashMode: the current flash mode
/// - Returns: AVCapturePhotoSettings
private func getSettings(camera: AVCaptureDevice, flashMode: AVCaptureDevice.FlashMode) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()
    if camera.hasFlash {
        settings.flashMode = flashMode
    }
    return settings
}
As you can see, lockForConfiguration is not needed here.
Then simply use it when capturing the photo:
@IBAction func captureButtonPressed(_ sender: UIButton) {
    let settings = getSettings(camera: camera, flashMode: flashMode)
    photoOutput.capturePhoto(with: settings, delegate: self)
}
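For completeness, a hedged sketch of the delegate side. In the answer above `self` is the delegate; here it is shown as a standalone object, and PhotoCaptureDelegate is my own name, not from the answer:

import AVFoundation
import UIKit

final class PhotoCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil, let data = photo.fileDataRepresentation() else {
            print("Capture failed: \(String(describing: error))")
            return
        }
        // Hand the decoded image off to your UI or photo-library code.
        let image = UIImage(data: data)
        print("Captured image: \(String(describing: image))")
    }
}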
Hope it will help.
For Objective-C:
- (IBAction)turnTorchOn:(UIButton *)sender {
    sender.selected = !sender.selected;
    BOOL on = sender.selected;
    // check if flashlight available
    Class captureDeviceClass = NSClassFromString(@"AVCaptureDevice");
    if (captureDeviceClass != nil) {
        AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        AVCapturePhotoSettings *photoSettings = [AVCapturePhotoSettings photoSettings];
        if ([device hasTorch] && [device hasFlash]) {
            [device lockForConfiguration:nil];
            if (on) {
                [device setTorchMode:AVCaptureTorchModeOn];
                photoSettings.flashMode = AVCaptureFlashModeOn;
                //torchIsOn = YES; // define as a variable/property if you need to know the status
            } else {
                [device setTorchMode:AVCaptureTorchModeOff];
                photoSettings.flashMode = AVCaptureFlashModeOff;
                //torchIsOn = NO;
            }
            [device unlockForConfiguration];
        }
    }
}
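If you only need the torch (the continuous light) rather than the photo flash, the Swift side is short. A hedged sketch, not tied to the code above:

import AVFoundation

// Toggle the torch, as opposed to configuring the photo flash.
func setTorch(on: Bool) {
    guard let device = AVCaptureDevice.default(for: .video), device.hasTorch else { return }
    do {
        try device.lockForConfiguration()
        device.torchMode = on ? .on : .off
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}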
I have created an iOS app where I can take videos and photos and save them to the gallery on my iPhone. My problem is that I want to keep recording while switching cameras, instead of the video being saved first. It is an option recently added to WhatsApp and other apps, but I can't figure out how to do it.
- (BOOL)toggleCamera
{
    BOOL success = NO;
    if (self.cameraCount > 1)
    {
        NSError *error;
        AVCaptureDeviceInput *newVideoInput;
        AVCaptureDevicePosition position = [self.videoInput.device position];
        if (position == AVCaptureDevicePositionBack)
        {
            newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.frontFacingCamera error:&error];
        }
        else if (position == AVCaptureDevicePositionFront)
        {
            newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.backFacingCamera error:&error];
        }
        else
        {
            return NO;
        }
        if (newVideoInput != nil)
        {
            [self.session beginConfiguration];
            [self.session removeInput:self.videoInput];
            if ([self.session canAddInput:newVideoInput])
            {
                [self.session addInput:newVideoInput];
                self.videoInput = newVideoInput;
            }
            else
            {
                [self.session addInput:self.videoInput];
            }
            [self.session commitConfiguration];
            success = YES;
        }
        else if (error)
        {
            if ([self.delegate respondsToSelector:@selector(recordManager:didFailWithError:)])
            {
                [self.delegate recordManager:self didFailWithError:error];
            }
        }
    }
    return success;
}
When I do this, the session changes and the video is saved before flipping the camera and that is not what I need. Can anyone help?
You don't need to change the output file.
You need to reconfigure the AVCaptureSession's AVCaptureDeviceInput by enclosing the configuration code in beginConfiguration() and commitConfiguration() calls.
Dispatch the work below to the session's private queue using its sync method, e.g. sessionQueue.sync { }.
self.session.beginConfiguration()
if let videoInput = self.videoInput {
    self.session.removeInput(videoInput)
}
addVideoInput()             // adding `AVCaptureDeviceInput` again
configureSessionQuality()   // configuring the session preset here
// Fix initial frame having incorrect orientation
let connection = self.videoOutput?.connection(with: .video)
if connection?.isVideoOrientationSupported == true {
    connection?.videoOrientation = self.orientation
}
// Fix preview mirroring
if self.cameraLocation == .front {
    if connection?.isVideoMirroringSupported == true {
        connection?.isVideoMirrored = true
    }
}
self.session.commitConfiguration()

// Change zoom scale if needed.
do {
    try self.captureDevice?.lockForConfiguration()
    self.captureDevice?.videoZoomFactor = zoomScale
    self.captureDevice?.unlockForConfiguration()
} catch {
    print(error)
}
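As noted above, all of that reconfiguration should run on the session's serial queue. A hedged usage sketch, where sessionQueue and reconfigureCameraInput() are placeholder names for that queue and for the begin/commitConfiguration block shown above:

import Foundation

let sessionQueue = DispatchQueue(label: "camera.session.queue")

func reconfigureCameraInput() {
    // the beginConfiguration() ... commitConfiguration() work from above
}

func switchCameraWhileRecording() {
    sessionQueue.sync {
        reconfigureCameraInput()
    }
}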
I am using the above approach in my production code, and it is working perfectly fine.
WebRTC video uses the front camera by default, which works fine. However, I need to switch it to the back camera, and I have not been able to find any code to do that.
Which part do I need to edit?
Is it the localView, the localVideoTrack, or the capturer?
Swift 3.0
A peer connection can have only one RTCVideoTrack for sending the video stream.
To switch between the front and back cameras, you must first remove the current video track from the peer connection.
After that, create a new RTCVideoTrack for the camera you need and set it on the peer connection.
I used these methods:
func swapCameraToFront() {
    let localStream = peerConnection?.localStreams.first as? RTCMediaStream
    if let currentTrack = localStream?.videoTracks.first as? RTCVideoTrack {
        localStream?.removeVideoTrack(currentTrack)
    }
    if let localVideoTrack = createLocalVideoTrack() {
        localStream?.addVideoTrack(localVideoTrack)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack)
    }
    if let localStream = localStream {
        peerConnection?.remove(localStream)
        peerConnection?.add(localStream)
    }
}

func swapCameraToBack() {
    let localStream = peerConnection?.localStreams.first as? RTCMediaStream
    if let currentTrack = localStream?.videoTracks.first as? RTCVideoTrack {
        localStream?.removeVideoTrack(currentTrack)
    }
    if let localVideoTrack = createLocalVideoTrackBackCamera() {
        localStream?.addVideoTrack(localVideoTrack)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack)
    }
    if let localStream = localStream {
        peerConnection?.remove(localStream)
        peerConnection?.add(localStream)
    }
}
As of now I only have the answer in Objective-C, in regard to Ankit's comment below. I will convert it into Swift after some time.
You can check the code below:
- (RTCVideoTrack *)createLocalVideoTrack {
    RTCVideoTrack *localVideoTrack = nil;
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionFront) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }
    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
    return localVideoTrack;
}

- (RTCVideoTrack *)createLocalVideoTrackBackCamera {
    RTCVideoTrack *localVideoTrack = nil;
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionBack) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }
    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
    return localVideoTrack;
}
If you decide to use the official Google build, here is the explanation:
First, you must configure your camera before calling start. The best place to do that is in ARDVideoCallViewDelegate, in the didCreateLocalCapturer method.
- (void)startCapture:(void (^)(BOOL succeeded))completionHandler {
    AVCaptureDevicePosition position = _usingFrontCamera ? AVCaptureDevicePositionFront : AVCaptureDevicePositionBack;
    __weak AVCaptureDevice *device = [self findDeviceForPosition:position];
    if ([device lockForConfiguration:nil]) {
        if ([device isFocusPointOfInterestSupported]) {
            [device setFocusModeLockedWithLensPosition:0.9 completionHandler:nil];
        }
        [device unlockForConfiguration];
    }
    AVCaptureDeviceFormat *format = [self selectFormatForDevice:device];
    if (format == nil) {
        RTCLogError(@"No valid formats for device %@", device);
        NSAssert(NO, @"");
        return;
    }
    NSInteger fps = [self selectFpsForFormat:format];
    [_capturer startCaptureWithDevice:device
                               format:format
                                  fps:fps
                    completionHandler:^(NSError *error) {
        NSLog(@"%@", error);
        if (error == nil) {
            completionHandler(YES);
        }
    }];
}
Don't forget that enabling the capture device is asynchronous; it is sometimes better to use the completion handler to be sure everything is done as expected.
I am not sure which Chrome version you are using for WebRTC, but with v54 and above there is a BOOL property called useBackCamera in the RTCAVFoundationVideoSource class. You can make use of this property to switch between the front and back cameras.
Swift 4.0 & 'GoogleWebRTC' : '1.1.20913'
The RTCAVFoundationVideoSource class has a property named useBackCamera that can be used to switch the camera being used.
@interface RTCAVFoundationVideoSource : RTCVideoSource

- (instancetype)init NS_UNAVAILABLE;

/**
 * Calling this function will cause frames to be scaled down to the
 * requested resolution. Also, frames will be cropped to match the
 * requested aspect ratio, and frames will be dropped to match the
 * requested fps. The requested aspect ratio is orientation agnostic and
 * will be adjusted to maintain the input orientation, so it doesn't
 * matter if e.g. 1280x720 or 720x1280 is requested.
 */
- (void)adaptOutputFormatToWidth:(int)width height:(int)height fps:(int)fps;

/** Returns whether rear-facing camera is available for use. */
@property(nonatomic, readonly) BOOL canUseBackCamera;

/** Switches the camera being used (either front or back). */
@property(nonatomic, assign) BOOL useBackCamera;

/** Returns the active capture session. */
@property(nonatomic, readonly) AVCaptureSession *captureSession;
Below is the implementation for switching the camera:
var useBackCamera: Bool = false

func switchCamera() {
    useBackCamera = !useBackCamera
    self.switchCamera(useBackCamera: useBackCamera)
}

private func switchCamera(useBackCamera: Bool) -> Void {
    let localStream = peerConnection?.localStreams.first
    if let videoTrack = localStream?.videoTracks.first {
        localStream?.removeVideoTrack(videoTrack)
    }
    let localVideoTrack = createLocalVideoTrack(useBackCamera: useBackCamera)
    localStream?.addVideoTrack(localVideoTrack)
    self.delegate?.webRTCClientDidAddLocal(videoTrack: localVideoTrack)
    if let ls = localStream {
        peerConnection?.remove(ls)
        peerConnection?.add(ls)
    }
}

func createLocalVideoTrack(useBackCamera: Bool) -> RTCVideoTrack {
    let videoSource = self.factory.avFoundationVideoSource(with: self.constraints)
    videoSource.useBackCamera = useBackCamera
    let videoTrack = self.factory.videoTrack(with: videoSource, trackId: "video")
    return videoTrack
}
In the current version of WebRTC, RTCAVFoundationVideoSource has been deprecated and replaced with a
generic RTCVideoSource combined with an RTCVideoCapturer implementation.
In order to switch the camera I'm doing this:
- (void)switchCameraToPosition:(AVCaptureDevicePosition)position completionHandler:(void (^)(void))completionHandler {
    if (self.cameraPosition != position) {
        RTCMediaStream *localStream = self.peerConnection.localStreams.firstObject;
        [localStream removeVideoTrack:self.localVideoTrack];
        //[self.peerConnection removeStream:localStream];
        self.localVideoTrack = [self createVideoTrack];
        [self startCaptureLocalVideoWithPosition:position completionHandler:^{
            [localStream addVideoTrack:self.localVideoTrack];
            //[self.peerConnection addStream:localStream];
            if (completionHandler) {
                completionHandler();
            }
        }];
        self.cameraPosition = position;
    }
}
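The startCaptureLocalVideoWithPosition: step above restarts the capturer on the newly selected device. This is not the answer's exact method, but with RTCCameraVideoCapturer that part could look roughly like the Swift sketch below (device/format selection kept minimal; the function name is my own):

import WebRTC
import AVFoundation

// Restart an RTCCameraVideoCapturer on the camera at `position`.
func startCaptureLocalVideo(position: AVCaptureDevice.Position,
                            capturer: RTCCameraVideoCapturer,
                            completion: @escaping () -> Void) {
    guard let device = RTCCameraVideoCapturer.captureDevices()
            .first(where: { $0.position == position }),
          let format = RTCCameraVideoCapturer.supportedFormats(for: device).last else {
        return
    }
    // Pick the highest frame rate the chosen format supports.
    let fps = format.videoSupportedFrameRateRanges
        .map { Int($0.maxFrameRate) }
        .max() ?? 30

    capturer.startCapture(with: device, format: format, fps: fps) { _ in
        completion()
    }
}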
Take a look at the commented-out lines: if you start adding/removing the stream from the peer connection, it will cause a delay in the video connection.
I'm using GoogleWebRTC 1.1.25102.
I am working on a project which can play music via an HFP device. But here's the problem: I want to detect whether an HFP or A2DP device is connected while music is playing.
Now I am using the AVFoundation framework to do this. Here's the code:
- (BOOL)isConnectedToBluetoothPeripheral
{
    BOOL isMatch = NO;
    NSString *categoryString = [AVAudioSession sharedInstance].category;
    AVAudioSessionCategoryOptions categoryOptions = [AVAudioSession sharedInstance].categoryOptions;
    if ((![categoryString isEqualToString:AVAudioSessionCategoryPlayAndRecord] &&
         ![categoryString isEqualToString:AVAudioSessionCategoryRecord]) ||
        categoryOptions != AVAudioSessionCategoryOptionAllowBluetooth)
    {
        NSError *error = nil;
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                         withOptions:AVAudioSessionCategoryOptionAllowBluetooth
                                               error:&error];
        if (error) {
            [[AVAudioSession sharedInstance] setCategory:categoryString
                                             withOptions:categoryOptions
                                                   error:&error];
            return isMatch;
        }
    }
    NSArray *availableInputs = [AVAudioSession sharedInstance].availableInputs;
    for (AVAudioSessionPortDescription *desc in availableInputs)
    {
        if ([[desc portType] isEqualToString:AVAudioSessionPortBluetoothA2DP] ||
            [[desc portType] isEqualToString:AVAudioSessionPortBluetoothHFP])
        {
            isMatch = YES;
            break;
        }
    }
    if (!isMatch)
    {
        NSArray *outputs = [[[AVAudioSession sharedInstance] currentRoute] outputs];
        for (AVAudioSessionPortDescription *desc in outputs)
        {
            if ([[desc portType] isEqualToString:AVAudioSessionPortBluetoothA2DP] ||
                [[desc portType] isEqualToString:AVAudioSessionPortBluetoothHFP])
            {
                isMatch = YES;
                break;
            }
        }
    }
    NSError *error = nil;
    [[AVAudioSession sharedInstance] setCategory:categoryString
                                     withOptions:categoryOptions
                                           error:&error];
    return isMatch;
}
It works well but causes another problem: when music is playing, using this method to detect the HFP connection interrupts playback for about two seconds.
So I tried another way to reduce the impact of detecting the HFP connection. I am using a flag,
static BOOL isHFPConnectedFlag
to indicate whether HFP or A2DP is connected. I use the previous method to detect the connection only once (when the app launches) and save the result into isHFPConnectedFlag. What's more, I observe AVAudioSessionRouteChangeNotification to keep the connection status in sync:
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(handleAudioSessionRouteChangeWithState:) name:AVAudioSessionRouteChangeNotification object:nil];
When the route change reason is AVAudioSessionRouteChangeReasonNewDeviceAvailable or AVAudioSessionRouteChangeReasonOldDeviceUnavailable, I know that HFP was connected or disconnected. Unfortunately, when I connect some HFP devices to my iPhone, the system does not post this notification, so I cannot detect the connection in that situation.
Does anyone know the reason, or a better way to implement this (detecting the HFP connection without interrupting music playback)?
You can use something like this:
- (BOOL)bluetoothDeviceA2DPAvailable {
    BOOL available = NO;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    AVAudioSessionRouteDescription *currentRoute = [audioSession currentRoute];
    for (AVAudioSessionPortDescription *output in currentRoute.outputs) {
        if ([[output portType] isEqualToString:AVAudioSessionPortBluetoothA2DP] ||
            [[output portType] isEqualToString:AVAudioSessionPortBluetoothHFP]) {
            available = YES;
            break;
        }
    }
    return available;
}
Swift 5 version:
func bluetoothDeviceHFPAvailable() -> Bool {
    let audioSession = AVAudioSession.sharedInstance()
    let currentRoute = audioSession.currentRoute
    for output in currentRoute.outputs {
        if output.portType == .bluetoothHFP || output.portType == .bluetoothA2DP {
            return true
        }
    }
    return false
}
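Since connections can change while the app is running, it can make sense to re-run this check whenever the audio route changes. A small hedged sketch building on the function above (the wrapper name is my own; keep the returned token alive for as long as you want to observe):

import AVFoundation

func observeBluetoothRouteChanges(onChange: @escaping (Bool) -> Void) -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { _ in
        // Re-evaluate the current outputs after every route change.
        onChange(bluetoothDeviceHFPAvailable())
    }
}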
My iOS 7 app vocalizes text when necessary.
What I'd like to do is let the user listen to their music or podcasts (or any other app's audio) while mine is running.
The expected behavior is that the other audio either mixes with or ducks under my app's speech, then returns to its initial volume right after.
I have tried many ways to achieve this, but nothing is good enough; I list the issues I face after the code.
My current implementation is based on creating a session prior to playback or text-to-speech, as follows:
+ (void)setAudioActive {
    [[self class] setSessionActiveWithMixing:YES];
}
After the playback/speech, I set it to idle as follows:
+ (void)setAudioIdle {
    [[self class] setSessionActiveWithMixing:NO];
}
The core function, which handles the session setup according to the active parameter, is as follows:
+ (void)setSessionActiveWithMixing:(BOOL)active
{
    NSError *error = NULL;
    BOOL success;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    static NSInteger counter = 0;

    success = [session setActive:NO error:&error];
    if (error) {
        DDLogError(@"startAudioMixAndBackground: session setActive:NO, %@", error.description);
    }
    else {
        counter--; if (counter < 0) counter = 0;
    }

    if (active) {
        AVAudioSessionCategoryOptions options = AVAudioSessionCategoryOptionAllowBluetooth
            //|AVAudioSessionCategoryOptionDefaultToSpeaker
            | AVAudioSessionCategoryOptionDuckOthers;

        success = [session setCategory://AVAudioSessionCategoryPlayback
                           AVAudioSessionCategoryPlayAndRecord
                          withOptions:options
                                error:&error];
        if (error) {
            // Do some error handling
            DDLogError(@"startAudioMixAndBackground: setCategory:AVAudioSessionCategoryPlayback, %@", error.description);
        }
        else {
            // activate the audio session
            success = [session setActive:YES error:&error];
            if (error) {
                DDLogError(@"startAudioMixAndBackground: session setActive:YES, %@", error.description);
            }
            else {
                counter++;
            }
        }
    }
    DDLogInfo(@"Audio session counter is: %ld", (long)counter);
}
My current issues are:
1) When my app starts to speak, I hear some kind of glitch in the sound, which is not nice.
2) When the route goes over Bluetooth, the underlying audio (say a podcast or iPod music) gets very quiet and sounds noisy, which makes my solution nearly unusable; my users will reject this poor level of quality.
3) When other Bluetooth-connected devices try to emit sound (say a GPS in a car, for instance), my app does not receive any interruption (or I handle it wrongly); see my code below:
- (void)startAudioMixAndBackground {
    // initialize our AudioSession -
    // this function has to be called once before calling any other AudioSession functions
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(audioSessionDidChangeInterruptionType:)
                                                 name:AVAudioSessionInterruptionNotification
                                               object:[AVAudioSession sharedInstance]];

    // set our default audio session state
    [[self class] setAudioIdle];

    [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
    if ([self canBecomeFirstResponder]) {
        [self becomeFirstResponder];
    }

    @synchronized(self) {
        self.okToPlaySound = YES;
    }
    //MPVolumeSettingsAlertShow();
}
// want remote control events (via Control Center, headphones, bluetooth, AirPlay, etc.)
- (void)remoteControlReceivedWithEvent:(UIEvent *)event
{
    if (event.type == UIEventTypeRemoteControl)
    {
        switch (event.subtype)
        {
            case UIEventSubtypeRemoteControlPause:
            case UIEventSubtypeRemoteControlStop:
                [[self class] setAudioIdle];
                break;
            case UIEventSubtypeRemoteControlPlay:
                [[self class] setAudioActive];
                break;
            default:
                break;
        }
    }
}
#pragma mark - Audio Support

- (void)audioSessionDidChangeInterruptionType:(NSNotification *)notification
{
    AVAudioSessionInterruptionType interruptionType =
        [[[notification userInfo] objectForKey:AVAudioSessionInterruptionTypeKey] unsignedIntegerValue];
    if (AVAudioSessionInterruptionTypeBegan == interruptionType)
    {
        DDLogVerbose(@"Session interrupted: --- Begin Interruption ---");
    }
    else if (AVAudioSessionInterruptionTypeEnded == interruptionType)
    {
        DDLogVerbose(@"Session interrupted: --- End Interruption ---");
    }
}
Your issue is most likely due to the category you are setting: AVAudioSessionCategoryPlayAndRecord. The PlayAndRecord category does not allow your app to mix/duck audio with other apps. You should reference the docs on audio session categories again here: https://developer.apple.com/library/ios/documentation/avfoundation/reference/AVAudioSession_ClassReference/Reference/Reference.html. It seems like AVAudioSessionCategoryAmbient is probably more what you're looking for.
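A hedged sketch of what the answer points at, in current Swift (function names are my own, not from the question's code): Ambient mixes with other audio by default, while, as far as I know, the duckOthers option is documented for the playback and playAndRecord categories if you actually want ducking.

import AVFoundation

// Before speaking: either mix with other audio (.ambient) or duck it (.playback + .duckOthers).
func activateSpeechSession(duckOthers: Bool) throws {
    let session = AVAudioSession.sharedInstance()
    if duckOthers {
        try session.setCategory(.playback, mode: .default, options: [.duckOthers])
    } else {
        try session.setCategory(.ambient, mode: .default, options: [])
    }
    try session.setActive(true)
}

// After speaking: deactivating with .notifyOthersOnDeactivation lets the other
// app's audio return to its original volume.
func deactivateSpeechSession() throws {
    try AVAudioSession.sharedInstance().setActive(false, options: [.notifyOthersOnDeactivation])
}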