iOS Swift - WebRTC: change from front camera to back camera

WebRTC video uses the front camera by default, which works fine. However, I need to switch it to the back camera, and I have not been able to find any code to do that.
Which part do I need to edit?
Is it the localView, the localVideoTrack, or the capturer?

Swift 3.0
A peer connection can have only one RTCVideoTrack for sending the video stream.
To switch between the front and back camera, first remove the current video track from the peer connection.
Then create a new RTCVideoTrack for the camera you need, and set it on the peer connection.
I used these methods:
func swapCameraToFront() {
    let localStream: RTCMediaStream? = peerConnection?.localStreams.first as? RTCMediaStream
    localStream?.removeVideoTrack(localStream?.videoTracks.first as! RTCVideoTrack)
    let localVideoTrack: RTCVideoTrack? = createLocalVideoTrack()
    if localVideoTrack != nil {
        localStream?.addVideoTrack(localVideoTrack)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack!)
    }
    peerConnection?.remove(localStream)
    peerConnection?.add(localStream)
}

func swapCameraToBack() {
    let localStream: RTCMediaStream? = peerConnection?.localStreams.first as? RTCMediaStream
    localStream?.removeVideoTrack(localStream?.videoTracks.first as! RTCVideoTrack)
    let localVideoTrack: RTCVideoTrack? = createLocalVideoTrackBackCamera()
    if localVideoTrack != nil {
        localStream?.addVideoTrack(localVideoTrack)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack!)
    }
    peerConnection?.remove(localStream)
    peerConnection?.add(localStream)
}

As of now I only have the answer in Objective-C, in regard to Ankit's comment below. I will convert it to Swift after some time.
You can check the code below:
- (RTCVideoTrack *)createLocalVideoTrack {
RTCVideoTrack *localVideoTrack = nil;
NSString *cameraID = nil;
for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
if (captureDevice.position == AVCaptureDevicePositionFront) {
cameraID = [captureDevice localizedName]; break;
}
}
RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
return localVideoTrack;
}
- (RTCVideoTrack *)createLocalVideoTrackBackCamera {
RTCVideoTrack *localVideoTrack = nil;
//AVCaptureDevicePositionFront
NSString *cameraID = nil;
for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
if (captureDevice.position == AVCaptureDevicePositionBack) {
cameraID = [captureDevice localizedName];
break;
}
}
RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
return localVideoTrack;
}

If you decide to use the official Google build, here is the explanation:
First, you must configure your camera before calling start. The best place to do that is in the ARDVideoCallViewDelegate method didCreateLocalCapturer:
- (void)startCapture:(void (^)(BOOL succeeded))completionHandler {
AVCaptureDevicePosition position = _usingFrontCamera ? AVCaptureDevicePositionFront : AVCaptureDevicePositionBack;
__weak AVCaptureDevice *device = [self findDeviceForPosition:position];
if ([device lockForConfiguration:nil]) {
if ([device isFocusPointOfInterestSupported]) {
[device setFocusModeLockedWithLensPosition:0.9 completionHandler: nil];
}
}
AVCaptureDeviceFormat *format = [self selectFormatForDevice:device];
if (format == nil) {
RTCLogError(#"No valid formats for device %#", device);
NSAssert(NO, #"");
return;
}
NSInteger fps = [self selectFpsForFormat:format];
[_capturer startCaptureWithDevice: device
format: format
fps:fps completionHandler:^(NSError * error) {
NSLog(#"%#",error);
if (error == nil) {
completionHandler(true);
}
}];
}
Don't forget that enabling the capture device is asynchronous; it is sometimes better to use the completion handler to be sure everything is done as expected.

I am not sure which Chrome/WebRTC version you are using, but with v54 and above there is a BOOL property called useBackCamera in the RTCAVFoundationVideoSource class. You can make use of this property to switch between the front and back camera.
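For reference, here is a minimal sketch of how that property could be used (assuming your local track was created from an RTCAVFoundationVideoSource that you keep a reference to; the function name is mine):
func toggleCamera(on videoSource: RTCAVFoundationVideoSource) {
    // Only switch when the device actually has a usable rear camera.
    if videoSource.canUseBackCamera {
        videoSource.useBackCamera = !videoSource.useBackCamera
    }
}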

Swift 4.0 & 'GoogleWebRTC' : '1.1.20913'
The RTCAVFoundationVideoSource class has a property named useBackCamera that can be used to switch the camera being used.
@interface RTCAVFoundationVideoSource : RTCVideoSource
- (instancetype)init NS_UNAVAILABLE;
/**
* Calling this function will cause frames to be scaled down to the
* requested resolution. Also, frames will be cropped to match the
* requested aspect ratio, and frames will be dropped to match the
* requested fps. The requested aspect ratio is orientation agnostic and
* will be adjusted to maintain the input orientation, so it doesn't
* matter if e.g. 1280x720 or 720x1280 is requested.
*/
- (void)adaptOutputFormatToWidth:(int)width height:(int)height fps:(int)fps;
/** Returns whether rear-facing camera is available for use. */
@property(nonatomic, readonly) BOOL canUseBackCamera;
/** Switches the camera being used (either front or back). */
@property(nonatomic, assign) BOOL useBackCamera;
/** Returns the active capture session. */
@property(nonatomic, readonly) AVCaptureSession *captureSession;
Below is the implementation for switching the camera:
var useBackCamera: Bool = false

func switchCamera() {
    useBackCamera = !useBackCamera
    self.switchCamera(useBackCamera: useBackCamera)
}

private func switchCamera(useBackCamera: Bool) -> Void {
    let localStream = peerConnection?.localStreams.first
    if let videoTrack = localStream?.videoTracks.first {
        localStream?.removeVideoTrack(videoTrack)
    }
    let localVideoTrack = createLocalVideoTrack(useBackCamera: useBackCamera)
    localStream?.addVideoTrack(localVideoTrack)
    self.delegate?.webRTCClientDidAddLocal(videoTrack: localVideoTrack)
    if let ls = localStream {
        peerConnection?.remove(ls)
        peerConnection?.add(ls)
    }
}

func createLocalVideoTrack(useBackCamera: Bool) -> RTCVideoTrack {
    let videoSource = self.factory.avFoundationVideoSource(with: self.constraints)
    videoSource.useBackCamera = useBackCamera
    let videoTrack = self.factory.videoTrack(with: videoSource, trackId: "video")
    return videoTrack
}

In the current version of WebRTC, RTCAVFoundationVideoSource has been deprecated and replaced with a
generic RTCVideoSource combined with an RTCVideoCapturer implementation.
In order to switch the camera I'm doing this:
- (void)switchCameraToPosition:(AVCaptureDevicePosition)position completionHandler:(void (^)(void))completionHandler {
if (self.cameraPosition != position) {
RTCMediaStream *localStream = self.peerConnection.localStreams.firstObject;
[localStream removeVideoTrack:self.localVideoTrack];
//[self.peerConnection removeStream:localStream];
self.localVideoTrack = [self createVideoTrack];
[self startCaptureLocalVideoWithPosition:position completionHandler:^{
[localStream addVideoTrack:self.localVideoTrack];
//[self.peerConnection addStream:localStream];
if (completionHandler) {
completionHandler();
}
}];
self.cameraPosition = position;
}
}
Take a look at the commented lines; if you start adding/removing the stream from the peer connection, it will cause a delay in the video connection.
I'm using GoogleWebRTC-1.1.25102.
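For completeness, here is a rough Swift sketch of what restarting capture with the newer RTCCameraVideoCapturer API can look like (the helper name and the simplistic format/fps selection are my own assumptions, not the answerer's exact startCaptureLocalVideoWithPosition implementation):
func startCapture(position: AVCaptureDevice.Position,
                  capturer: RTCCameraVideoCapturer,
                  completion: @escaping () -> Void) {
    // Pick the capture device for the requested position.
    guard let device = RTCCameraVideoCapturer.captureDevices()
        .first(where: { $0.position == position }) else { return }
    // Pick a supported format and a frame rate it allows; a real app would
    // choose resolution and fps more carefully.
    guard let format = RTCCameraVideoCapturer.supportedFormats(for: device).last,
          let fpsRange = format.videoSupportedFrameRateRanges.first else { return }
    capturer.startCapture(with: device,
                          format: format,
                          fps: Int(fpsRange.maxFrameRate)) { _ in
        // Re-add the local video track only after capture has actually started.
        completion()
    }
}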

Related

DJI SDK waypoint missions - Swift, iOS

I want to program a system where coordinates can be passed to the drone as waypoints and the drone will carry out the actions. The DJI API is documented in Objective-C, and while the concepts will be the same, I'm struggling to understand how a mission is programmed.
If someone could help me with the basic structure of a waypoint mission and how to pass it to the drone, it would be very helpful. Maybe I'm not understanding things well, but the DJI API doesn't seem to be very descriptive of how things work.
I'm not asking to be spoon-fed, but I'd appreciate an explanation from someone with insight.
This is how I wrote the waypoint function in Swift; you can read the code comments below to understand the mission specification.
/// Build a waypoint function that allows the drone to move between points according to its altitude, longitude, and latitude
func waypointMission() -> DJIWaypointMission? {
/// Define a new object class for the waypoint mission
let mission = DJIMutableWaypointMission()
mission.maxFlightSpeed = 15
mission.autoFlightSpeed = 8
mission.finishedAction = .noAction
mission.headingMode = .usingInitialDirection
mission.flightPathMode = .normal
mission.rotateGimbalPitch = false /// Change this to True if you want the camera gimbal pitch to move between waypoints
mission.exitMissionOnRCSignalLost = true
mission.gotoFirstWaypointMode = .pointToPoint
mission.repeatTimes = 1
/// Keep listening to the drone location included in latitude and longitude
guard let droneLocationKey = DJIFlightControllerKey(param: DJIFlightControllerParamAircraftLocation) else {
return nil
}
guard let droneLocationValue = DJISDKManager.keyManager()?.getValueFor(droneLocationKey) else {
return nil
}
let droneLocation = droneLocationValue.value as! CLLocation
let droneCoordinates = droneLocation.coordinate
/// Check if the returned coordinate value is valid or not
if !CLLocationCoordinate2DIsValid(droneCoordinates) {
return nil
}
mission.pointOfInterest = droneCoordinates
let loc1 = CLLocationCoordinate2DMake(droneCoordinates.latitude, droneCoordinates.longitude)
let waypoint1 = DJIWaypoint(coordinate: loc1)
waypoint1.altitude = 2.0 /// The altitude which the drone flies to as the first point and should be of type float
waypoint1.heading = 0 /// This is between [-180, 180] degrees, where the drone moves when reaching a waypoint. 0 means don't change the drone's heading
waypoint1.actionRepeatTimes = 1 /// Repeat this mission just for one time
waypoint1.actionTimeoutInSeconds = 60
// waypoint1.cornerRadiusInMeters = 5
waypoint1.turnMode = .clockwise /// When the drones changing its heading. It moves clockwise
waypoint1.gimbalPitch = 0 /// This is between [-90, 0] degrees, if you want to change this value, then change rotateGimbalPitch to True. The drone gimbal will move by the value when the drone reaches its waypoint
waypoint1.speed = 0.5 /// Note that this value does not make the drone move with speed 0.5 m/s because this is an error from the firmware and can't be fixed. However, we need to trigger it to be able to change the next one
let loc2 = CLLocationCoordinate2DMake(droneCoordinates.latitude, droneCoordinates.longitude)
let waypoint2 = DJIWaypoint(coordinate: loc2)
waypoint2.altitude = 15.0 /// should be of type float
waypoint2.heading = 0
waypoint2.actionRepeatTimes = 1
waypoint2.actionTimeoutInSeconds = 60
//waypoint2.cornerRadiusInMeters = 5
waypoint2.turnMode = .clockwise
waypoint2.gimbalPitch = 0
/// Change the velocity of the drone while moving to this waypoint
waypoint2.speed = 0.5
mission.add(waypoint1)
mission.add(waypoint2)
return DJIWaypointMission(mission: mission)
}
Now when you call the previous function, you need to pass its result to the mission timeline. Here is an example flow of a full timeline mission:
/// Set up the drone mission and start its timeline
func goUpVerticallyMission() {
/// Check if the drone is connected
let product = DJISDKManager.product()
if (product?.model) != nil {
/// This is the array that holds all the timline elements to be executed later in order
var elements = [DJIMissionControlTimelineElement]()
/// Reset Gimbal Position
let attitude = DJIGimbalAttitude(pitch: 0.0, roll: 0.0, yaw: 0.0)
let pitchAction: DJIGimbalAttitudeAction = DJIGimbalAttitudeAction(attitude: attitude)!
elements.append(pitchAction) // task number 0
let takeOff = DJITakeOffAction()
elements.append(takeOff) // task number 1
/// Set up and start a new waypoint mission
var mission: DJIWaypointMission?
guard let result = self.waypointMission() else {return}
mission = result
elements.append(mission!) // task number 2
/// After the waypoint mission finishes stop recording the video
let stopVideoAction = DJIRecordVideoAction(stopRecordVideo: ())
elements.append(stopVideoAction!) // task number 3
/// This is landing action after finishing the task
// let landAction = DJILandAction()
// elements.append(landAction)
/// This is the go home and landing action in which the drone goes back to its starting point when the first time started the mission
let goHomeLandingAction = DJIGoHomeAction()
elements.append(goHomeLandingAction) /// task number 4
/// Check if there is any error while appending the timeline mission to the array.
let error = DJISDKManager.missionControl()?.scheduleElements(elements)
if error != nil {
print("Error detected with the mission")
} else {
/// If there is no error then start the mission
DispatchQueue.main.asyncAfter(deadline: .now()) {
DJISDKManager.missionControl()?.startTimeline()
}
}
}
}
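If you also want to observe how the timeline is progressing, a minimal sketch along these lines can help (this is my addition, not part of the original answer; verify the listener API against the DJI SDK version you are using):
func listenToTimelineEvents() {
    DJISDKManager.missionControl()?.addListener(self, toTimelineProgressWith: { event, element, error, _ in
        if let error = error {
            print("Timeline error: \(error.localizedDescription)")
        } else {
            // event is a DJIMissionControlTimelineEvent such as started or finished
            print("Timeline event: \(event.rawValue)")
        }
    })
}
// Remove the listener when you no longer need it, e.g. in deinit:
// DJISDKManager.missionControl()?.removeListener(self)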
Please let me know if anything is unclear or if you need help with anything else!
Look at the DJI example app on GitHub:
https://github.com/dji-sdk/Mobile-SDK-iOS/tree/master/Sample%20Code/ObjcSampleCode/DJISdkDemo/Demo/MissionManager
//
// WaypointMissionViewController.m
// DJISdkDemo
//
// Copyright © 2015 DJI. All rights reserved.
//
/**
* This file demonstrates the process to start a waypoint mission. In this demo,
* the aircraft will go to four waypoints, shoot photos and record videos.
* The flight speed can be controlled by calling the class method
* setAutoFlightSpeed:withCompletion:. In this demo, the aircraft will
* change the speed right after it reaches the second point (point with index 1).
*
* CAUTION: it is highly recommended to run this sample using the simulator.
*/
#import <DJISDK/DJISDK.h>
#import "DemoUtility.h"
#import "WaypointMissionViewController.h"
#define ONE_METER_OFFSET (0.00000901315)
@interface WaypointMissionViewController ()
@property (nonatomic) DJIWaypointMissionOperator *wpOperator;
@property (nonatomic) DJIWaypointMission *downloadMission;
@end
@implementation WaypointMissionViewController
-(void)viewWillAppear:(BOOL)animated {
[super viewWillAppear:animated];
self.wpOperator = [[DJISDKManager missionControl] waypointMissionOperator];
}
-(void)viewWillDisappear:(BOOL)animated {
[super viewWillDisappear:animated];
[self.wpOperator removeListenerOfExecutionEvents:self];
}
/**
* Because waypoint mission is refactored and uses a different interface design
* from the other missions, we need to override the UI actions.
*/
-(void)onPrepareButtonClicked:(id)sender {
DJIWaypointMission *wp = (DJIWaypointMission *)[self initializeMission];
NSError *error = [self.wpOperator loadMission:wp];
if (error) {
ShowResult(#"Prepare Mission Failed:%#", error);
return;
}
WeakRef(target);
[self.wpOperator addListenerToUploadEvent:self withQueue:nil andBlock:^(DJIWaypointMissionUploadEvent * _Nonnull event) {
WeakReturn(target);
[target onUploadEvent:event];
}];
[self.wpOperator uploadMissionWithCompletion:^(NSError * _Nullable error) {
WeakReturn(target);
if (error) {
ShowResult(#"ERROR: uploadMission:withCompletion:. %#", error.description);
}
else {
ShowResult(#"SUCCESS: uploadMission:withCompletion:.");
}
}];
}
- (IBAction)onStartButtonClicked:(id)sender {
WeakRef(target);
[self.wpOperator addListenerToExecutionEvent:self withQueue:nil andBlock:^(DJIWaypointMissionExecutionEvent * _Nonnull event) {
[target showWaypointMissionProgress:event];
}];
[self.wpOperator startMissionWithCompletion:^(NSError * _Nullable error) {
if (error) {
ShowResult(#"ERROR: startMissionWithCompletion:. %#", error.description);
}
else {
ShowResult(#"SUCCESS: startMissionWithCompletion:. ");
}
[self missionDidStart:error];
}];
}
-(void)onStopButtonClicked:(id)sender {
WeakRef(target);
[self.wpOperator stopMissionWithCompletion:^(NSError * _Nullable error) {
WeakReturn(target);
if (error) {
ShowResult(#"ERROR: stopMissionExecutionWithCompletion:. %#", error.description);
}
else {
ShowResult(#"SUCCESS: stopMissionExecutionWithCompletion:. ");
}
[target missionDidStop:error];
}];
}
-(void)onDownloadButtonClicked:(id)sender {
self.downloadMission = nil;
WeakRef(target);
[self.wpOperator addListenerToDownloadEvent:self
withQueue:nil
andBlock:^(DJIWaypointMissionDownloadEvent * _Nonnull event)
{
if (event.progress.downloadedWaypointIndex == event.progress.totalWaypointCount) {
ShowResult(#"SUCCESS: the waypoint mission is downloaded. ");
target.downloadMission = target.wpOperator.loadedMission;
[target.wpOperator removeListenerOfDownloadEvents:target];
[target.progressBar setHidden:YES];
[target mission:target.downloadMission didDownload:event.error];
}
else if (event.error) {
ShowResult(#"Download Mission Failed:%#", event.error);
[target.progressBar setHidden:YES];
[target mission:target.downloadMission didDownload:event.error];
[target.wpOperator removeListenerOfDownloadEvents:target];
} else {
[target.progressBar setHidden:NO];
float progress = ((float)event.progress.downloadedWaypointIndex + 1) / (float)event.progress.totalWaypointCount;
NSLog(#"Download Progress:%d%%", (int)(progress*100));
[target.progressBar setProgress:progress];
}
}];
[self.wpOperator downloadMissionWithCompletion:^(NSError * _Nullable error) {
WeakReturn(target);
if (error) {
ShowResult(#"ERROR: downloadMissionWithCompletion:withCompletion:. %#", error.description);
}
else {
ShowResult(#"SUCCESS: downloadMissionWithCompletion:withCompletion:.");
}
}];
}
-(void)onPauseButtonClicked:(id)sender {
[self missionWillPause];
[self.wpOperator pauseMissionWithCompletion:^(NSError * _Nullable error) {
if (error) {
ShowResult(#"ERROR: pauseMissionWithCompletion:. %#", error.description);
}
else {
ShowResult(#"SUCCESS: pauseMissionWithCompletion:. ");
}
}];
}
-(void)onResumeButtonClicked:(id)sender {
WeakRef(target);
[self.wpOperator resumeMissionWithCompletion:^(NSError * _Nullable error) {
WeakReturn(target);
if (error) {
ShowResult(#"ERROR: resumeMissionWithCompletion:. %#", error.description);
}
else {
ShowResult(#"SUCCESS: resumeMissionWithCompletion:. ");
}
[target missionDidResume:error];
}];
}
/**
* Prepare the waypoint mission. The basic workflow is:
* 1. Create an instance of DJIWaypointMission.
* 2. Create coordinates.
* 3. Use the coordinate to create an instance of DJIWaypoint.
* 4. Add actions for each waypoint.
* 5. Add the waypoints into the mission.
*/
-(DJIMission*) initializeMission {
// Step 1: create mission
DJIMutableWaypointMission* mission = [[DJIMutableWaypointMission alloc] init];
mission.maxFlightSpeed = 15.0;
mission.autoFlightSpeed = 4.0;
// Step 2: prepare coordinates
CLLocationCoordinate2D northPoint;
CLLocationCoordinate2D eastPoint;
CLLocationCoordinate2D southPoint;
CLLocationCoordinate2D westPoint;
northPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude + 10 * ONE_METER_OFFSET, self.homeLocation.longitude);
eastPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude, self.homeLocation.longitude + 10 * ONE_METER_OFFSET);
southPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude - 10 * ONE_METER_OFFSET, self.homeLocation.longitude);
westPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude, self.homeLocation.longitude - 10 * ONE_METER_OFFSET);
// Step 3: create waypoints
DJIWaypoint* northWP = [[DJIWaypoint alloc] initWithCoordinate:northPoint];
northWP.altitude = 10.0;
DJIWaypoint* eastWP = [[DJIWaypoint alloc] initWithCoordinate:eastPoint];
eastWP.altitude = 20.0;
DJIWaypoint* southWP = [[DJIWaypoint alloc] initWithCoordinate:southPoint];
southWP.altitude = 30.0;
DJIWaypoint* westWP = [[DJIWaypoint alloc] initWithCoordinate:westPoint];
westWP.altitude = 40.0;
// Step 4: add actions
[northWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeRotateGimbalPitch param:-60]];
[northWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeShootPhoto param:0]];
[eastWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeShootPhoto param:0]];
[southWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeRotateAircraft param:60]];
[southWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeStartRecord param:0]];
[westWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeStopRecord param:0]];
// Step 5: add waypoints into the mission
[mission addWaypoint:northWP];
[mission addWaypoint:eastWP];
[mission addWaypoint:southWP];
[mission addWaypoint:westWP];
return mission;
}
- (void)onUploadEvent:(DJIWaypointMissionUploadEvent *) event
{
if (event.currentState == DJIWaypointMissionStateReadyToExecute) {
ShowResult(#"SUCCESS: the whole waypoint mission is uploaded.");
[self.progressBar setHidden:YES];
[self.wpOperator removeListenerOfUploadEvents:self];
}
else if (event.error) {
ShowResult(#"ERROR: waypoint mission uploading failed. %#", event.error.description);
[self.progressBar setHidden:YES];
[self.wpOperator removeListenerOfUploadEvents:self];
}
else if (event.currentState == DJIWaypointMissionStateReadyToUpload ||
event.currentState == DJIWaypointMissionStateNotSupported ||
event.currentState == DJIWaypointMissionStateDisconnected) {
ShowResult(#"ERROR: waypoint mission uploading failed. %#", event.error.description);
[self.progressBar setHidden:YES];
[self.wpOperator removeListenerOfUploadEvents:self];
} else if (event.currentState == DJIWaypointMissionStateUploading) {
[self.progressBar setHidden:NO];
DJIWaypointUploadProgress *progress = event.progress;
float progressInPercent = (float)progress.uploadedWaypointIndex / (float)progress.totalWaypointCount; // cast to float to avoid integer division
[self.progressBar setProgress:progressInPercent];
}
}
-(void) showWaypointMissionProgress:(DJIWaypointMissionExecutionEvent *)event {
NSMutableString* statusStr = [NSMutableString new];
[statusStr appendFormat:#"previousState:%#\n", [[self class] descriptionForMissionState:event.previousState]];
[statusStr appendFormat:#"currentState:%#\n", [[self class] descriptionForMissionState:event.currentState]];
[statusStr appendFormat:#"Target Waypoint Index: %zd\n", (long)event.progress.targetWaypointIndex];
[statusStr appendString:[NSString stringWithFormat:#"Is Waypoint Reached: %#\n",
event.progress.isWaypointReached ? #"YES" : #"NO"]];
[statusStr appendString:[NSString stringWithFormat:#"Execute State: %#\n", [[self class] descriptionForExecuteState:event.progress.execState]]];
if (event.error) {
[statusStr appendString:[NSString stringWithFormat:#"Execute Error: %#", event.error.description]];
[self.wpOperator removeListenerOfExecutionEvents:self];
}
[self.statusLabel setText:statusStr];
}
/**
* Display the information of the mission if it is downloaded successfully.
*/
-(void)mission:(DJIMission *)mission didDownload:(NSError *)error {
if (error) return;
if ([mission isKindOfClass:[DJIWaypointMission class]]) {
// Display information of waypoint mission.
[self showWaypointMission:(DJIWaypointMission*)mission];
}
}
-(void) showWaypointMission:(DJIWaypointMission*)wpMission {
NSMutableString* missionInfo = [NSMutableString stringWithString:@"The waypoint mission is downloaded successfully: \n"];
[missionInfo appendString:[NSString stringWithFormat:@"RepeatTimes: %zd\n", wpMission.repeatTimes]];
[missionInfo appendString:[NSString stringWithFormat:@"HeadingMode: %u\n", (unsigned int)wpMission.headingMode]];
[missionInfo appendString:[NSString stringWithFormat:@"FinishedAction: %u\n", (unsigned int)wpMission.finishedAction]];
[missionInfo appendString:[NSString stringWithFormat:@"FlightPathMode: %u\n", (unsigned int)wpMission.flightPathMode]];
[missionInfo appendString:[NSString stringWithFormat:@"MaxFlightSpeed: %f\n", wpMission.maxFlightSpeed]];
[missionInfo appendString:[NSString stringWithFormat:@"AutoFlightSpeed: %f\n", wpMission.autoFlightSpeed]];
[missionInfo appendString:[NSString stringWithFormat:@"There are %zd waypoint(s). ", wpMission.waypointCount]];
[self.statusLabel setText:missionInfo];
}
+(NSString *)descriptionForMissionState:(DJIWaypointMissionState)state {
switch (state) {
case DJIWaypointMissionStateUnknown:
return #"Unknown";
case DJIWaypointMissionStateExecuting:
return #"Executing";
case DJIWaypointMissionStateUploading:
return #"Uploading";
case DJIWaypointMissionStateRecovering:
return #"Recovering";
case DJIWaypointMissionStateDisconnected:
return #"Disconnected";
case DJIWaypointMissionStateNotSupported:
return #"NotSupported";
case DJIWaypointMissionStateReadyToUpload:
return #"ReadyToUpload";
case DJIWaypointMissionStateReadyToExecute:
return #"ReadyToExecute";
case DJIWaypointMissionStateExecutionPaused:
return #"ExecutionPaused";
}
return #"Unknown";
}
+(NSString *)descriptionForExecuteState:(DJIWaypointMissionExecuteState)state {
switch (state) {
case DJIWaypointMissionExecuteStateInitializing:
return #"Initializing";
break;
case DJIWaypointMissionExecuteStateMoving:
return #"Moving";
case DJIWaypointMissionExecuteStatePaused:
return #"Paused";
case DJIWaypointMissionExecuteStateBeginAction:
return #"BeginAction";
case DJIWaypointMissionExecuteStateDoingAction:
return #"Doing Action";
case DJIWaypointMissionExecuteStateFinishedAction:
return #"Finished Action";
case DJIWaypointMissionExecuteStateCurveModeMoving:
return #"CurveModeMoving";
case DJIWaypointMissionExecuteStateCurveModeTurning:
return #"CurveModeTurning";
case DJIWaypointMissionExecuteStateReturnToFirstWaypoint:
return #"Return To first Point";
default:
break;
}
return #"Unknown";
}
@end

How do you set a device's flash mode in Swift 4?

I'm calling a function to attempt to turn on my device's flash:
private func flashOn(device:AVCaptureDevice)
{
print("flashOn called");
do {
try device.lockForConfiguration()
// line below returns warning 'flashMode' was deprecated in iOS 10.0: Use AVCapturePhotoSettings.flashMode instead.
device.flashMode = AVCaptureDevice.FlashMode.auto
device.unlockForConfiguration()
} catch {
// handle error
print("flash on error");
}
}
Setting device.flashMode to AVCaptureDevice.FlashMode.auto brings up the warning "'flashMode' was deprecated in iOS 10.0: Use AVCapturePhotoSettings.flashMode instead." Even though it is just a warning, it does not enable the flash when testing my app, so I set the line to this, like the warning suggests:
AVCapturePhotoSettings.flashMode = AVCaptureDevice.FlashMode.auto
And I get the error "Instance member 'flashMode' cannot be used on type 'AVCapturePhotoSettings'"
So I have no idea how to set the flash in Xcode version 9 using Swift 4.0. All the answers I've found in Stack Overflow have been for previous versions.
I've been facing the same problem. Unfortunately, many useful methods were deprecated in iOS 10 and 11. Here is how I managed to resolve it:
An AVCapturePhotoSettings object is unique and cannot be reused, so you need to get new settings every time using this method:
/// the current flash mode
private var flashMode: AVCaptureDevice.FlashMode = .auto
/// Get settings
///
/// - Parameters:
/// - camera: the camera
/// - flashMode: the current flash mode
/// - Returns: AVCapturePhotoSettings
private func getSettings(camera: AVCaptureDevice, flashMode: AVCaptureDevice.FlashMode) -> AVCapturePhotoSettings {
let settings = AVCapturePhotoSettings()
if camera.hasFlash {
settings.flashMode = flashMode
}
return settings
}
As you can see, lockForConfiguration() is not needed.
Then simply use it while capturing a photo:
@IBAction func captureButtonPressed(_ sender: UIButton) {
let settings = getSettings(camera: camera, flashMode: flashMode)
photoOutput.capturePhoto(with: settings, delegate: self)
}
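To actually receive the captured photo (and see the flash fire), the delegate you pass to capturePhoto(with:delegate:) needs to implement AVCapturePhotoCaptureDelegate. A minimal sketch, assuming iOS 11+ and a hypothetical CameraViewController that owns the photoOutput used above:
extension CameraViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        if let error = error {
            print("Capture error: \(error)")
            return
        }
        // Convert the captured photo into image data (JPEG/HEIF depending on the settings).
        guard let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else { return }
        // Use the image here, e.g. display it or save it to the photo library.
        print("Captured image of size \(image.size)")
    }
}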
Hope it will help.
For Objective-C:
- (IBAction)turnTorchOn: (UIButton *) sender {
sender.selected = !sender.selected;
BOOL on;
if (sender.selected) {
on = true;
} else {
on = false;
}
// check if flashlight available
Class captureDeviceClass = NSClassFromString(@"AVCaptureDevice");
if (captureDeviceClass != nil) {
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCapturePhotoSettings *photosettings = [AVCapturePhotoSettings photoSettings];
if ([device hasTorch] && [device hasFlash]) {
[device lockForConfiguration:nil];
if (on) {
[device setTorchMode:AVCaptureTorchModeOn];
photosettings.flashMode = AVCaptureFlashModeOn;
//torchIsOn = YES; //define as a variable/property if you need to know status
} else {
[device setTorchMode:AVCaptureTorchModeOff];
photosettings.flashMode = AVCaptureFlashModeOff;
//torchIsOn = NO;
}
[device unlockForConfiguration];
}
}
}

Toggle camera from back to front while continuing recording with Objective C

I have created an iOS app where I can take videos and photos and save them to the gallery on my iPhone. My problem is that I want to keep recording while toggling the camera, before the video is saved. It is an option recently added to WhatsApp and other apps, but I can't figure out how to do it.
-(BOOL)toggleCamera
{
BOOL success=NO;
if (self.cameraCount>1)
{
NSError* error;
AVCaptureDeviceInput* newVedioInput;
AVCaptureDevicePosition position=[self.videoInput.device position];
if (position==AVCaptureDevicePositionBack)
{
newVedioInput=[[AVCaptureDeviceInput alloc] initWithDevice:self.frontFacingCamera error:&error];
}
else if (position==AVCaptureDevicePositionFront)
{
newVedioInput=[[AVCaptureDeviceInput alloc] initWithDevice:self.backFacingCamera error:&error];
}
else
{
return NO;
}
if (newVedioInput!=nil)
{
[self.session beginConfiguration];
[self.session removeInput:self.videoInput];
if ([self.session canAddInput:newVedioInput])
{
[self.session addInput:newVedioInput];
self.videoInput=newVedioInput;
}
else
{
[self.session addInput:self.videoInput];
}
[self.session commitConfiguration];
success=YES;
}
else if (error)
{
if ([self.delegate respondsToSelector:@selector(recordManager:didFailWithError:)])
{
[self.delegate recordManager:self didFailWithError:error];
}
}
}
return success;
}
When I do this, the session changes and the video is saved before flipping the camera, which is not what I need. Can anyone help?
You don't need to change the output file.
You need to reconfigure the AVCaptureDeviceInput of the AVCaptureSession by enclosing the configuration code in beginConfiguration() and commitConfiguration() calls.
Dispatch the work below to the session's private queue using its sync method, e.g. sessionQueue.sync {}:
self.session.beginConfiguration()
if let videoInput = self.videoInput {
self.session.removeInput(videoInput)
}
addVideoInput() //adding `AVCaptureDeviceInput` again
configureSessionQuality() //Configuring session preset here.
// Fix initial frame having incorrect orientation
let connection = self.videoOutput?.connection(with: .video)
if connection?.isVideoOrientationSupported == true {
connection?.videoOrientation = self.orientation
}
//Fix preview mirroring
if self.cameraLocation == .front {
if connection?.isVideoMirroringSupported == true {
connection?.isVideoMirrored = true
}
}
self.session.commitConfiguration()
//Change zoom scale if needed.
do {
try self.captureDevice?.lockForConfiguration()
self.captureDevice?.videoZoomFactor = zoomScale
self.captureDevice?.unlockForConfiguration()
} catch {
print(error)
}
I am using the above approach in my production code. It is working perfectly fine.
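For reference, a Swift sketch of the full toggle along those lines (names such as session, videoInput, cameraLocation, and sessionQueue are assumptions mirroring the snippet above, not a drop-in implementation):
func toggleCameraInput() {
    sessionQueue.sync {
        let newPosition: AVCaptureDevice.Position = (self.cameraLocation == .front) ? .back : .front
        guard let newDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                      for: .video,
                                                      position: newPosition),
              let newInput = try? AVCaptureDeviceInput(device: newDevice) else { return }
        self.session.beginConfiguration()
        if let currentInput = self.videoInput {
            self.session.removeInput(currentInput)
        }
        if self.session.canAddInput(newInput) {
            self.session.addInput(newInput)
            self.videoInput = newInput
            self.cameraLocation = newPosition
        } else if let currentInput = self.videoInput {
            // Fall back to the previous input if the new one cannot be added.
            self.session.addInput(currentInput)
        }
        self.session.commitConfiguration()
    }
}
Note that this pairs with a pipeline where you write frames yourself (e.g. AVCaptureVideoDataOutput plus AVAssetWriter); with AVCaptureMovieFileOutput the current file is typically finalized when its video input changes.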

How do I convert voice to text in iOS [closed]

As far as I know, Apple's native frameworks don't have an API for converting voice to text, and we have to go with a third-party framework to do that, which has many drawbacks, such as the user having to use the microphone to convert voice to text.
I can find lots of information on converting text to voice, but not the other way around.
I couldn't find any clear information about this, and most of it is quite uncertain.
If someone could shed some light on it, it'd be really great!
For Objective-C, I wrote a speech converter class a while back to convert voice to text.
Step 1: Create a Speech Converter class
Create a new Cocoa class and subclass it from NSObject.
Name it, let's say, ATSpeechRecognizer.
In ATSpeechRecognizer.h:
#import <Foundation/Foundation.h>
#import <Speech/Speech.h>
#import <AVFoundation/AVFoundation.h>
typedef NS_ENUM(NSInteger, ATSpeechRecognizerState) {
ATSpeechRecognizerStateRunning,
ATSpeechRecognizerStateStopped
};
@protocol ATSpeechDelegate<NSObject>
@required
/*This method relays parsed text from Speech to the delegate responder class*/
-(void)convertedSpeechToText:(NSString *) parsedText;
/*This method relays change in Speech recognition ability to delegate responder class*/
-(void) speechRecAvailabilityChanged:(BOOL) status;
/*This method relays error messages to delegate responder class*/
-(void) sendErrorInfoToViewController:(NSString *) errorMessage;
@optional
/*This method relays info regarding whether speech rec is running or stopped to delegate responder class. State will be either ATSpeechRecognizerStateRunning or ATSpeechRecognizerStateStopped. You may or may not implement this method*/
-(void) changeStateIndicator:(ATSpeechRecognizerState) state;
@end
@interface ATSpeechRecognizer : NSObject <SFSpeechRecognizerDelegate>
+ (ATSpeechRecognizer *)sharedObject;
/*Delegate to communicate with requesting VCs*/
@property (weak, nonatomic) id<ATSpeechDelegate> delegate;
/*Class Methods*/
-(void) toggleRecording;
-(void) activateSpeechRecognizerWithLocaleIdentifier:(NSString *) localeIdentifier andBlock:(void (^)(BOOL isAuthorized))successBlock;
@end
And in ATSpeechRecognizer.m:
#import "ATSpeechRecognizer.h"
@interface ATSpeechRecognizer ()
/*This object handles the speech recognition requests. It provides an audio input to the speech recognizer.*/
@property SFSpeechAudioBufferRecognitionRequest *speechAudioRecRequest;
/*The recognition task where it gives you the result of the recognition request. Having this object is handy as you can cancel or stop the task. */
@property SFSpeechRecognitionTask *speechRecogTask;
/*This is your Speech recognizer*/
@property SFSpeechRecognizer *speechRecognizer;
/*This is your audio engine. It is responsible for providing your audio input.*/
@property AVAudioEngine *audioEngine;
@end
@implementation ATSpeechRecognizer
#pragma mark - Constants
//Error Messages
#define kErrorMessageAuthorize @"You declined the permission to perform speech recognition. Please authorize the operation in your device settings."
#define kErrorMessageRestricted @"Speech recognition isn't available on this OS version. Please upgrade to iOS 10 or later."
#define kErrorMessageNotDetermined @"Speech recognition isn't authorized yet"
#define kErrorMessageAudioInputNotFound @"This device has no audio input node"
#define kErrorMessageRequestFailed @"Unable to create an SFSpeechAudioBufferRecognitionRequest object"
#define kErrorMessageAudioRecordingFailed @"Unable to start Audio recording due to failure in Recording Engine"
#pragma mark - Singleton methods
+ (ATSpeechRecognizer *)sharedObject {
static ATSpeechRecognizer *sharedClass = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
sharedClass = [[self alloc] init];
});
return sharedClass;
}
- (id)init {
if (self = [super init]) {
}
return self;
}
#pragma mark - Recognition methods
-(void) activateSpeechRecognizerWithLocaleIdentifier:(NSString *) localeIdentifier andBlock:(void (^)(BOOL isAuthorized))successBlock{
//enter Described language here
if([localeIdentifier length]>0){
NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:localeIdentifier];
_speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];
_speechRecognizer.delegate = self;
_audioEngine = [[AVAudioEngine alloc] init];
[self getSpeechRecognizerAuthenticationStatusWithSuccessBlock:^(BOOL isAuthorized) {
successBlock(isAuthorized);
}];
}
else{
successBlock(NO);
}
}
/*Microphone usage Must be authorized in the info.plist*/
-(void) toggleRecording{
if(_audioEngine.isRunning){
[self stopAudioEngine];
}
else{
[self startAudioEngine];
}
}
#pragma mark - Internal Methods
/*
In case different buttons are used for recording and stopping, these methods should be called indiviually. Otherwise use -(void) toggleRecording.
*/
-(void) startAudioEngine{
if([self isDelegateValidForSelector:NSStringFromSelector(@selector(changeStateIndicator:))]){
[_delegate changeStateIndicator:ATSpeechRecognizerStateRunning];
}
[self startRecordingSpeech];
}
-(void) stopAudioEngine{
if([self isDelegateValidForSelector:NSStringFromSelector(@selector(changeStateIndicator:))]){
[_delegate changeStateIndicator:ATSpeechRecognizerStateStopped];
}
[_audioEngine stop];
[_speechAudioRecRequest endAudio];
self.speechRecogTask = nil;
self.speechAudioRecRequest = nil;
}
/*
All the voice data is transmitted to Apple’s backend for processing. Therefore, it is mandatory to get the user’s authorization. Speech Recognition Must be authorized in the info.plist
*/
-(void) getSpeechRecognizerAuthenticationStatusWithSuccessBlock:(void (^)(BOOL isAuthorized))successBlock{
[SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
switch (status) {
case SFSpeechRecognizerAuthorizationStatusAuthorized:
successBlock(YES);
break;
case SFSpeechRecognizerAuthorizationStatusDenied:
[self sendErrorMessageToDelegate:kErrorMessageAuthorize];
successBlock(NO);
break; //without this break the denied case would fall through to the cases below
case SFSpeechRecognizerAuthorizationStatusRestricted:
[self sendErrorMessageToDelegate:kErrorMessageRestricted];
successBlock(NO);
break;
case SFSpeechRecognizerAuthorizationStatusNotDetermined:
[self sendErrorMessageToDelegate:kErrorMessageNotDetermined];
successBlock(NO);
break;
default:
break;
}
}];
}
-(void) startRecordingSpeech{
/*
Check if the Task is running. If yes, Cancel it and start anew
*/
if(_speechRecogTask!=nil){
[_speechRecogTask cancel];
_speechRecogTask = nil;
}
/*
Prepare for the audio recording. Here we set the category of the session as recording, the mode as measurement, and activate it
*/
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
@try {
[audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
[audioSession setMode:AVAudioSessionModeMeasurement error:nil];
[audioSession setActive:YES error:nil];
} @catch (NSException *exception) {
[self sendErrorMessageToDelegate:exception.reason];
}
/*
Instantiate the recognitionRequest. Here we create the SFSpeechAudioBufferRecognitionRequest object. Later, we use it to pass our audio data to Apple’s servers.
*/
@try {
_speechAudioRecRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
} @catch (NSException *exception) {
[self sendErrorMessageToDelegate:kErrorMessageRequestFailed];
}
/*
Check if the audioEngine (your device) has an audio input for recording.
*/
if(_audioEngine.inputNode!=nil){
AVAudioInputNode *inputNode = _audioEngine.inputNode;
/*If true, partial (non-final) results for each utterance will be reported.
Default is true*/
_speechAudioRecRequest.shouldReportPartialResults = YES;
/*Start the recognition by calling the recognitionTask method of our speechRecognizer. This function has a completion handler. This completion handler will be called every time the recognition engine has received input, has refined its current recognition, or has been canceled or stopped, and will return a final transcript.*/
_speechRecogTask = [_speechRecognizer recognitionTaskWithRequest:_speechAudioRecRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
BOOL isFinal = NO;
if(result!=nil){
if([self isDelegateValidForSelector:NSStringFromSelector(@selector(convertedSpeechToText:))]){
[_delegate convertedSpeechToText:[[result bestTranscription] formattedString]];
}
isFinal = [result isFinal]; //True if the hypotheses will not change; speech processing is complete.
}
//If Error of Completed, end it.
if(error!=nil || isFinal){
[_audioEngine stop];
[inputNode removeTapOnBus:0];
self.speechRecogTask = nil;
self.speechAudioRecRequest = nil;
if(error!=nil){
[self stopAudioEngine];
[self sendErrorMessageToDelegate:[NSString stringWithFormat:@"%li - %@", (long)error.code, error.localizedDescription]];
}
}
}];
/* Add an audio input to the recognitionRequest. Note that it is ok to add the audio input after starting the recognitionTask. The Speech Framework will start recognizing as soon as an audio input has been added.*/
AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
[inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
[self.speechAudioRecRequest appendAudioPCMBuffer:buffer];
}];
/*Prepare and start the audioEngine.*/
[_audioEngine prepare];
@try {
[_audioEngine startAndReturnError:nil];
} @catch (NSException *exception) {
[self sendErrorMessageToDelegate:kErrorMessageAudioRecordingFailed];
}
}
else{
[self sendErrorMessageToDelegate:kErrorMessageAudioInputNotFound];
}
}
-(BOOL) isDelegateValidForSelector:(NSString*)selectorName{
if(_delegate!=nil && [_delegate respondsToSelector:NSSelectorFromString(selectorName)]){
return YES;
}
return NO;
}
-(void) sendErrorMessageToDelegate:(NSString*) errorMessage{
if([self isDelegateValidForSelector:NSStringFromSelector(@selector(sendErrorInfoToViewController:))]){
[_delegate sendErrorInfoToViewController:errorMessage];
}
}
#pragma mark - Speech Recognizer Delegate Methods
-(void) speechRecognizer:(SFSpeechRecognizer *)speechRecognizer availabilityDidChange:(BOOL)available{
if(!available){
[self stopAudioEngine];
}
[_delegate speechRecAvailabilityChanged:available];
}
@end
And that's it. Now you can use this class in any project where you want to convert voice to text. Just be sure to read the guidance comments if you feel confused about how it works.
Step 2: Set up the ATSpeechRecognizer Class in your VC
Import ATSpeechRecognizer in your View Controller and set up the delegate like this:
#import "ATSpeechRecognizer.h"
@interface ViewController : UIViewController <ATSpeechDelegate>{
BOOL isRecAllowed;
}
Call the following method in your view controller's viewDidLoad to set it up and get it running:
-(void) setUpSpeechRecognizerService{
[ATSpeechRecognizer sharedObject].delegate = self;
[[ATSpeechRecognizer sharedObject] activateSpeechRecognizerWithLocaleIdentifier:@"en-US" andBlock:^(BOOL isAuthorized) {
isRecAllowed = isAuthorized; /*Is operation allowed or not?*/
}];
}
Now set up delegate methods:
#pragma mark - Speech Recog Delegates
-(void) convertedSpeechToText:(NSString *)parsedText{
if(parsedText!=nil){
_txtView.text = parsedText; //You got Text from voice. Use it as you want
}
}
-(void) speechRecAvailabilityChanged:(BOOL)status{
isRecAllowed = status; //Status of Conversion ability has changed. Use Status flag to allow/stop operations
}
-(void) changeStateIndicator:(ATSpeechRecognizerState) state{
if(state==ATSpeechRecognizerStateStopped){
//Speech Recognizer is Stopped
_lblState.text = #"Stopped";
}
else{
//Speech Recognizer is running
_lblState.text = #"Running";
}
_txtView.text = #"";
}
-(void) sendErrorInfoToViewController:(NSString *)errorMessage{
[self showPopUpForErrorMessage:errorMessage]; /*Some error occured. Show it to user*/
}
To Start Conversion of Voice to Text:
- (IBAction)btnRecordTapped:(id)sender {
if(!isRecAllowed){
[self showPopUpForErrorMessage:#"Speech recognition is either not authorized or available for this device. Please authorize the operation or upgrade to latest iOS. If you have done all this, check your internet connectivity"];
}
else{
[[ATSpeechRecognizer sharedObject] toggleRecording]; /*If speech Recognizer is running, it will turn it off. if it is off, it will set it on*/
/*
If you want to do it mannually, use startAudioEngine method and stopAudioEngine method to explicitly perform those operations instead of toggleRecording
*/
}
}
And that's it. All the further explanation you need is in code comments. Ping me if you need further explanation.
Here is the full Swift code for the same:
import UIKit
import Speech
public class ViewController: UIViewController, SFSpeechRecognizerDelegate {
// MARK: Properties
private let speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
private var recognitionTask: SFSpeechRecognitionTask?
private let audioEngine = AVAudioEngine()
@IBOutlet var textView : UITextView!
@IBOutlet var recordButton : UIButton!
// MARK: UIViewController
public override func viewDidLoad() {
super.viewDidLoad()
// Disable the record buttons until authorization has been granted.
recordButton.isEnabled = false
}
override public func viewDidAppear(_ animated: Bool) {
speechRecognizer.delegate = self
SFSpeechRecognizer.requestAuthorization { authStatus in
/*
The callback may not be called on the main thread. Add an
operation to the main queue to update the record button's state.
*/
OperationQueue.main.addOperation {
switch authStatus {
case .authorized:
self.recordButton.isEnabled = true
case .denied:
self.recordButton.isEnabled = false
self.recordButton.setTitle("User denied access to speech recognition", for: .disabled)
case .restricted:
self.recordButton.isEnabled = false
self.recordButton.setTitle("Speech recognition restricted on this device", for: .disabled)
case .notDetermined:
self.recordButton.isEnabled = false
self.recordButton.setTitle("Speech recognition not yet authorized", for: .disabled)
}
}
}
}
private func startRecording() throws {
// Cancel the previous task if it's running.
if let recognitionTask = recognitionTask {
recognitionTask.cancel()
self.recognitionTask = nil
}
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(AVAudioSessionCategoryRecord)
try audioSession.setMode(AVAudioSessionModeMeasurement)
try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
guard let inputNode = audioEngine.inputNode else { fatalError("Audio engine has no input node") }
guard let recognitionRequest = recognitionRequest else { fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object") }
// Configure request so that results are returned before audio recording is finished
recognitionRequest.shouldReportPartialResults = true
// A recognition task represents a speech recognition session.
// We keep a reference to the task so that it can be cancelled.
recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest) { result, error in
var isFinal = false
if let result = result {
self.textView.text = result.bestTranscription.formattedString
isFinal = result.isFinal
}
if error != nil || isFinal {
self.audioEngine.stop()
inputNode.removeTap(onBus: 0)
self.recognitionRequest = nil
self.recognitionTask = nil
self.recordButton.isEnabled = true
self.recordButton.setTitle("Start Recording", for: [])
}
}
let recordingFormat = inputNode.outputFormat(forBus: 0)
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
self.recognitionRequest?.append(buffer)
}
audioEngine.prepare()
try audioEngine.start()
textView.text = "(Go ahead, I'm listening)"
}
// MARK: SFSpeechRecognizerDelegate
public func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer, availabilityDidChange available: Bool) {
if available {
recordButton.isEnabled = true
recordButton.setTitle("Start Recording", for: [])
} else {
recordButton.isEnabled = false
recordButton.setTitle("Recognition not available", for: .disabled)
}
}
// MARK: Interface Builder actions
@IBAction func recordButtonTapped() {
if audioEngine.isRunning {
audioEngine.stop()
recognitionRequest?.endAudio()
recordButton.isEnabled = false
recordButton.setTitle("Stopping", for: .disabled)
} else {
try! startRecording()
recordButton.setTitle("Stop recording", for: [])
}
}
}

Private iOS Framework Returning NULL

I'm trying to use BatteryCenter and CommonUtilities private frameworks under iOS 9.1 with the help of nst's iOS Runtime Headers. It's for research purposes and won't make it to the AppStore.
Here are their respective codes:
- (void)batteryCenter {
NSBundle *bundle = [NSBundle bundleWithPath:#"/System/Library/PrivateFrameworks/BatteryCenter.framework"];
BOOL success = [bundle load];
if(success) {
Class BCBatteryDevice = NSClassFromString(@"BCBatteryDevice");
id si = [[BCBatteryDevice alloc] init];
NSLog(@"Charging: %@", [si valueForKey:@"charging"]);
}
}
- (void)commonUtilities {
NSBundle *bundle = [NSBundle bundleWithPath:#"/System/Library/PrivateFrameworks/CommonUtilities.framework"];
BOOL success = [bundle load];
if(success) {
Class CommonUtilities = NSClassFromString(@"CUTWiFiManager");
id si = [CommonUtilities valueForKey:@"sharedInstance"];
NSLog(@"Is Wi-Fi Enabled: %@", [si valueForKey:@"isWiFiEnabled"]);
NSLog(@"Wi-Fi Scaled RSSI: %@", [si valueForKey:@"wiFiScaledRSSI"]);
NSLog(@"Last Wi-Fi Power Info: %@", [si valueForKey:@"lastWiFiPowerInfo"]);
}
}
Although I get the classes back, all of their respective values are NULL, which is weird since some must be true; e.g. I'm connected to Wi-Fi, so isWiFiEnabled should be YES.
What exactly is missing so that my code doesn't return what's expected? Does it need entitlement(s)? If so, which exactly?
In Swift, I managed to get this working without the BatteryCenter headers. I'm still looking for a way to access the list of attached batteries without using BCBatteryDeviceController, but this is what I have working so far:
Swift 3:
guard case let batteryCenterHandle = dlopen("/System/Library/PrivateFrameworks/BatteryCenter.framework/BatteryCenter", RTLD_LAZY), batteryCenterHandle != nil else {
fatalError("BatteryCenter not found")
}
guard let batteryDeviceControllerClass = NSClassFromString("BCBatteryDeviceController") as? NSObjectProtocol else {
fatalError("BCBatteryDeviceController not found")
}
let instance = batteryDeviceControllerClass.perform(Selector(("sharedInstance"))).takeUnretainedValue()
if let devices = instance.value(forKey: "connectedDevices") as? [AnyObject] {
// You will have more than one battery in connectedDevices if your device is using a Smart Case
for battery in devices {
print(battery)
}
}
Swift 2.2:
guard case let batteryCenterHandle = dlopen("/System/Library/PrivateFrameworks/BatteryCenter.framework/BatteryCenter", RTLD_LAZY) where batteryCenterHandle != nil else {
fatalError("BatteryCenter not found")
}
guard let c = NSClassFromString("BCBatteryDeviceController") as? NSObjectProtocol else {
fatalError("BCBatteryDeviceController not found")
}
let instance = c.performSelector("sharedInstance").takeUnretainedValue()
if let devices = instance.valueForKey("connectedDevices") as? [AnyObject] {
// You will have more than one battery in connectedDevices if your device is using a Smart Case
for battery in devices {
print(battery)
}
}
This logs:
<BCBatteryDevice: 0x15764a3d0; vendor = Apple; productIdentifier = 0; parts = (null); matchIdentifier = (null); baseIdentifier = InternalBattery-0; name = iPhone; percentCharge = 63; lowBattery = NO; connected = YES; charging = YES; internal = YES; powerSource = YES; poweredSoureState = AC Power; transportType = 1 >
You need to first access the BCBatteryDeviceController; after the success block is executed, you can get the list of all connected devices through it.
Here is the code for the same:
Class CommonUtilities = NSClassFromString(#"BCBatteryDeviceController");
id si = [CommonUtilities valueForKey:#"sharedInstance"];
BCBatteryDeviceController* objBCBatteryDeviceController = si;
NSLog(#"Connected devices: %#", objBCBatteryDeviceController.connectedDevices);
