DJI SDK waypoint missions - Swift, iOS

I want to program a system where coordinates can be passed to the drone as waypoints and the drone will carry out the actions. The DJI API is documented in Objective-C, and while the concepts are the same, I'm struggling to understand how a mission is programmed.
If someone can help me with the basic structure of a waypoint mission and how to pass it to the drone, it would be very helpful. Maybe I'm not understanding things well, but the DJI API doesn't seem to be very descriptive about how things work.
I'm not asking to be spoon-fed, just hoping someone with insight can give me an explanation.

This is how I wrote the waypoint function in Swift; the code comments below explain the mission specification.
/// Build a waypoint mission that moves the drone between points defined by latitude, longitude, and altitude
func waypointMission() -> DJIWaypointMission? {
    /// Create a new mutable waypoint mission object
    let mission = DJIMutableWaypointMission()
    mission.maxFlightSpeed = 15
    mission.autoFlightSpeed = 8
    mission.finishedAction = .noAction
    mission.headingMode = .usingInitialDirection
    mission.flightPathMode = .normal
    mission.rotateGimbalPitch = false /// Change this to true if you want the camera gimbal pitch to move between waypoints
    mission.exitMissionOnRCSignalLost = true
    mission.gotoFirstWaypointMode = .pointToPoint
    mission.repeatTimes = 1
    /// Read the drone's current location (latitude and longitude) through the key manager
    guard let droneLocationKey = DJIFlightControllerKey(param: DJIFlightControllerParamAircraftLocation) else {
        return nil
    }
    guard let droneLocationValue = DJISDKManager.keyManager()?.getValueFor(droneLocationKey) else {
        return nil
    }
    let droneLocation = droneLocationValue.value as! CLLocation
    let droneCoordinates = droneLocation.coordinate
    /// Check that the returned coordinate is valid
    if !CLLocationCoordinate2DIsValid(droneCoordinates) {
        return nil
    }
    mission.pointOfInterest = droneCoordinates
    let loc1 = CLLocationCoordinate2DMake(droneCoordinates.latitude, droneCoordinates.longitude)
    let waypoint1 = DJIWaypoint(coordinate: loc1)
    waypoint1.altitude = 2.0 /// The altitude (a Float, in meters) the drone flies to at the first waypoint
    waypoint1.heading = 0 /// In [-180, 180] degrees; the heading the drone turns to when reaching the waypoint. 0 means don't change the heading
    waypoint1.actionRepeatTimes = 1 /// Run this waypoint's actions only once
    waypoint1.actionTimeoutInSeconds = 60
    // waypoint1.cornerRadiusInMeters = 5
    waypoint1.turnMode = .clockwise /// When the drone changes its heading, it turns clockwise
    waypoint1.gimbalPitch = 0 /// In [-90, 0] degrees; to use this, set rotateGimbalPitch to true. The gimbal moves to this pitch when the drone reaches the waypoint
    waypoint1.speed = 0.5 /// Note: due to a firmware quirk this value does not actually make the drone fly at 0.5 m/s, but it needs to be set so the next waypoint's speed takes effect
    let loc2 = CLLocationCoordinate2DMake(droneCoordinates.latitude, droneCoordinates.longitude)
    let waypoint2 = DJIWaypoint(coordinate: loc2)
    waypoint2.altitude = 15.0 /// Should be a Float
    waypoint2.heading = 0
    waypoint2.actionRepeatTimes = 1
    waypoint2.actionTimeoutInSeconds = 60
    // waypoint2.cornerRadiusInMeters = 5
    waypoint2.turnMode = .clockwise
    waypoint2.gimbalPitch = 0
    /// Change the velocity of the drone while moving to this waypoint
    waypoint2.speed = 0.5
    mission.add(waypoint1)
    mission.add(waypoint2)
    return DJIWaypointMission(mission: mission)
}
Now, when you call the previous function, you need to pass its result to the mission timeline. Here is an example flow of a full timeline mission:
/// Set up the drone mission and start its timeline
func goUpVerticallyMission() {
    /// Check that a product (drone) is connected
    let product = DJISDKManager.product()
    if (product?.model) != nil {
        /// This array holds all the timeline elements to be executed later, in order
        var elements = [DJIMissionControlTimelineElement]()
        /// Reset the gimbal position
        let attitude = DJIGimbalAttitude(pitch: 0.0, roll: 0.0, yaw: 0.0)
        let pitchAction: DJIGimbalAttitudeAction = DJIGimbalAttitudeAction(attitude: attitude)!
        elements.append(pitchAction) // task number 0
        let takeOff = DJITakeOffAction()
        elements.append(takeOff) // task number 1
        /// Set up and add the waypoint mission
        var mission: DJIWaypointMission?
        guard let result = self.waypointMission() else { return }
        mission = result
        elements.append(mission!) // task number 2
        /// After the waypoint mission finishes, stop recording video
        let stopVideoAction = DJIRecordVideoAction(stopRecordVideo: ())
        elements.append(stopVideoAction!) // task number 3
        /// Landing action after finishing the task
        // let landAction = DJILandAction()
        // elements.append(landAction)
        /// Go-home-and-land action: the drone flies back to the point where the mission started
        let goHomeLandingAction = DJIGoHomeAction()
        elements.append(goHomeLandingAction) // task number 4
        /// Check for errors while scheduling the timeline elements
        let error = DJISDKManager.missionControl()?.scheduleElements(elements)
        if error != nil {
            print("Error detected with the mission")
        } else {
            /// If there is no error, start the timeline
            DispatchQueue.main.asyncAfter(deadline: .now()) {
                DJISDKManager.missionControl()?.startTimeline()
            }
        }
    }
}
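If you also want to track what the timeline is doing (which element is running, and whether it failed), Mission Control exposes a progress listener. Below is a minimal sketch assuming DJI Mobile SDK 4.x method names; verify the exact signature against your SDK version before relying on it.
/// Sketch: register a timeline listener before starting the timeline so you can
/// log which element is executing and surface any errors.
func startTimelineWithLogging() {
    guard let missionControl = DJISDKManager.missionControl() else { return }
    missionControl.addListener(self, toTimelineProgressWith: { event, element, error, _ in
        if let error = error {
            print("Timeline error: \(error.localizedDescription)")
        } else {
            // `event` reports started/progressed/finished transitions for each element.
            print("Timeline event \(event.rawValue) for element \(String(describing: element))")
        }
    })
    missionControl.startTimeline()
}
Remember to remove the listener again (for example in deinit) so the closure does not outlive your view controller.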
Please let me know if anything is unclear or if you need help with anything else!

Look at the DJI example app on GitHub:
https://github.com/dji-sdk/Mobile-SDK-iOS/tree/master/Sample%20Code/ObjcSampleCode/DJISdkDemo/Demo/MissionManager
//
// WaypointMissionViewController.m
// DJISdkDemo
//
// Copyright © 2015 DJI. All rights reserved.
//
/**
* This file demonstrates the process to start a waypoint mission. In this demo,
* the aircraft will go to four waypoints, shoot photos and record videos.
* The flight speed can be controlled by calling the class method
* setAutoFlightSpeed:withCompletion:. In this demo, the aircraft changes
* speed right after it reaches the second waypoint (the point with index 1).
*
* CAUTION: it is highly recommended to run this sample using the simulator.
*/
#import <DJISDK/DJISDK.h>
#import "DemoUtility.h"
#import "WaypointMissionViewController.h"
#define ONE_METER_OFFSET (0.00000901315)
@interface WaypointMissionViewController ()
@property (nonatomic) DJIWaypointMissionOperator *wpOperator;
@property (nonatomic) DJIWaypointMission *downloadMission;
@end
@implementation WaypointMissionViewController
-(void)viewWillAppear:(BOOL)animated {
[super viewWillAppear:animated];
self.wpOperator = [[DJISDKManager missionControl] waypointMissionOperator];
}
-(void)viewWillDisappear:(BOOL)animated {
[super viewWillDisappear:animated];
[self.wpOperator removeListenerOfExecutionEvents:self];
}
/**
* Because waypoint mission is refactored and uses a different interface design
* from the other missions, we need to override the UI actions.
*/
-(void)onPrepareButtonClicked:(id)sender {
DJIWaypointMission *wp = (DJIWaypointMission *)[self initializeMission];
NSError *error = [self.wpOperator loadMission:wp];
if (error) {
ShowResult(@"Prepare Mission Failed:%@", error);
return;
}
WeakRef(target);
[self.wpOperator addListenerToUploadEvent:self withQueue:nil andBlock:^(DJIWaypointMissionUploadEvent * _Nonnull event) {
WeakReturn(target);
[target onUploadEvent:event];
}];
[self.wpOperator uploadMissionWithCompletion:^(NSError * _Nullable error) {
WeakReturn(target);
if (error) {
ShowResult(@"ERROR: uploadMission:withCompletion:. %@", error.description);
}
else {
ShowResult(@"SUCCESS: uploadMission:withCompletion:.");
}
}];
}
- (IBAction)onStartButtonClicked:(id)sender {
WeakRef(target);
[self.wpOperator addListenerToExecutionEvent:self withQueue:nil andBlock:^(DJIWaypointMissionExecutionEvent * _Nonnull event) {
[target showWaypointMissionProgress:event];
}];
[self.wpOperator startMissionWithCompletion:^(NSError * _Nullable error) {
if (error) {
ShowResult(@"ERROR: startMissionWithCompletion:. %@", error.description);
}
else {
ShowResult(@"SUCCESS: startMissionWithCompletion:. ");
}
[self missionDidStart:error];
}];
}
-(void)onStopButtonClicked:(id)sender {
WeakRef(target);
[self.wpOperator stopMissionWithCompletion:^(NSError * _Nullable error) {
WeakReturn(target);
if (error) {
ShowResult(@"ERROR: stopMissionExecutionWithCompletion:. %@", error.description);
}
else {
ShowResult(@"SUCCESS: stopMissionExecutionWithCompletion:. ");
}
[target missionDidStop:error];
}];
}
-(void)onDownloadButtonClicked:(id)sender {
self.downloadMission = nil;
WeakRef(target);
[self.wpOperator addListenerToDownloadEvent:self
withQueue:nil
andBlock:^(DJIWaypointMissionDownloadEvent * _Nonnull event)
{
if (event.progress.downloadedWaypointIndex == event.progress.totalWaypointCount) {
ShowResult(@"SUCCESS: the waypoint mission is downloaded. ");
target.downloadMission = target.wpOperator.loadedMission;
[target.wpOperator removeListenerOfDownloadEvents:target];
[target.progressBar setHidden:YES];
[target mission:target.downloadMission didDownload:event.error];
}
else if (event.error) {
ShowResult(@"Download Mission Failed:%@", event.error);
[target.progressBar setHidden:YES];
[target mission:target.downloadMission didDownload:event.error];
[target.wpOperator removeListenerOfDownloadEvents:target];
} else {
[target.progressBar setHidden:NO];
float progress = ((float)event.progress.downloadedWaypointIndex + 1) / (float)event.progress.totalWaypointCount;
NSLog(@"Download Progress:%d%%", (int)(progress*100));
[target.progressBar setProgress:progress];
}
}];
[self.wpOperator downloadMissionWithCompletion:^(NSError * _Nullable error) {
WeakReturn(target);
if (error) {
ShowResult(@"ERROR: downloadMissionWithCompletion:withCompletion:. %@", error.description);
}
else {
ShowResult(@"SUCCESS: downloadMissionWithCompletion:withCompletion:.");
}
}];
}
-(void)onPauseButtonClicked:(id)sender {
[self missionWillPause];
[self.wpOperator pauseMissionWithCompletion:^(NSError * _Nullable error) {
if (error) {
ShowResult(@"ERROR: pauseMissionWithCompletion:. %@", error.description);
}
else {
ShowResult(@"SUCCESS: pauseMissionWithCompletion:. ");
}
}];
}
-(void)onResumeButtonClicked:(id)sender {
WeakRef(target);
[self.wpOperator resumeMissionWithCompletion:^(NSError * _Nullable error) {
WeakReturn(target);
if (error) {
ShowResult(@"ERROR: resumeMissionWithCompletion:. %@", error.description);
}
else {
ShowResult(@"SUCCESS: resumeMissionWithCompletion:. ");
}
[target missionDidResume:error];
}];
}
/**
* Prepare the waypoint mission. The basic workflow is:
* 1. Create an instance of DJIWaypointMission.
* 2. Create coordinates.
* 3. Use the coordinate to create an instance of DJIWaypoint.
* 4. Add actions for each waypoint.
* 5. Add the waypoints into the mission.
*/
-(DJIMission*) initializeMission {
// Step 1: create mission
DJIMutableWaypointMission* mission = [[DJIMutableWaypointMission alloc] init];
mission.maxFlightSpeed = 15.0;
mission.autoFlightSpeed = 4.0;
// Step 2: prepare coordinates
CLLocationCoordinate2D northPoint;
CLLocationCoordinate2D eastPoint;
CLLocationCoordinate2D southPoint;
CLLocationCoordinate2D westPoint;
northPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude + 10 * ONE_METER_OFFSET, self.homeLocation.longitude);
eastPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude, self.homeLocation.longitude + 10 * ONE_METER_OFFSET);
southPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude - 10 * ONE_METER_OFFSET, self.homeLocation.longitude);
westPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude, self.homeLocation.longitude - 10 * ONE_METER_OFFSET);
// Step 3: create waypoints
DJIWaypoint* northWP = [[DJIWaypoint alloc] initWithCoordinate:northPoint];
northWP.altitude = 10.0;
DJIWaypoint* eastWP = [[DJIWaypoint alloc] initWithCoordinate:eastPoint];
eastWP.altitude = 20.0;
DJIWaypoint* southWP = [[DJIWaypoint alloc] initWithCoordinate:southPoint];
southWP.altitude = 30.0;
DJIWaypoint* westWP = [[DJIWaypoint alloc] initWithCoordinate:westPoint];
westWP.altitude = 40.0;
// Step 4: add actions
[northWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeRotateGimbalPitch param:-60]];
[northWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeShootPhoto param:0]];
[eastWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeShootPhoto param:0]];
[southWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeRotateAircraft param:60]];
[southWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeStartRecord param:0]];
[westWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeStopRecord param:0]];
// Step 5: add waypoints into the mission
[mission addWaypoint:northWP];
[mission addWaypoint:eastWP];
[mission addWaypoint:southWP];
[mission addWaypoint:westWP];
return mission;
}
- (void)onUploadEvent:(DJIWaypointMissionUploadEvent *) event
{
if (event.currentState == DJIWaypointMissionStateReadyToExecute) {
ShowResult(@"SUCCESS: the whole waypoint mission is uploaded.");
[self.progressBar setHidden:YES];
[self.wpOperator removeListenerOfUploadEvents:self];
}
else if (event.error) {
ShowResult(@"ERROR: waypoint mission uploading failed. %@", event.error.description);
[self.progressBar setHidden:YES];
[self.wpOperator removeListenerOfUploadEvents:self];
}
else if (event.currentState == DJIWaypointMissionStateReadyToUpload ||
event.currentState == DJIWaypointMissionStateNotSupported ||
event.currentState == DJIWaypointMissionStateDisconnected) {
ShowResult(@"ERROR: waypoint mission uploading failed. %@", event.error.description);
[self.progressBar setHidden:YES];
[self.wpOperator removeListenerOfUploadEvents:self];
} else if (event.currentState == DJIWaypointMissionStateUploading) {
[self.progressBar setHidden:NO];
DJIWaypointUploadProgress *progress = event.progress;
float progressInPercent = (float)progress.uploadedWaypointIndex / (float)progress.totalWaypointCount; // cast to float to avoid integer division
[self.progressBar setProgress:progressInPercent];
}
}
-(void) showWaypointMissionProgress:(DJIWaypointMissionExecutionEvent *)event {
NSMutableString* statusStr = [NSMutableString new];
[statusStr appendFormat:@"previousState:%@\n", [[self class] descriptionForMissionState:event.previousState]];
[statusStr appendFormat:@"currentState:%@\n", [[self class] descriptionForMissionState:event.currentState]];
[statusStr appendFormat:@"Target Waypoint Index: %zd\n", (long)event.progress.targetWaypointIndex];
[statusStr appendString:[NSString stringWithFormat:@"Is Waypoint Reached: %@\n",
event.progress.isWaypointReached ? @"YES" : @"NO"]];
[statusStr appendString:[NSString stringWithFormat:@"Execute State: %@\n", [[self class] descriptionForExecuteState:event.progress.execState]]];
if (event.error) {
[statusStr appendString:[NSString stringWithFormat:@"Execute Error: %@", event.error.description]];
[self.wpOperator removeListenerOfExecutionEvents:self];
}
[self.statusLabel setText:statusStr];
}
/**
* Display the information of the mission if it is downloaded successfully.
*/
-(void)mission:(DJIMission *)mission didDownload:(NSError *)error {
if (error) return;
if ([mission isKindOfClass:[DJIWaypointMission class]]) {
// Display information of waypoint mission.
[self showWaypointMission:(DJIWaypointMission*)mission];
}
}
-(void) showWaypointMission:(DJIWaypointMission*)wpMission {
NSMutableString* missionInfo = [NSMutableString stringWithString:@"The waypoint mission is downloaded successfully: \n"];
[missionInfo appendString:[NSString stringWithFormat:@"RepeatTimes: %zd\n", wpMission.repeatTimes]];
[missionInfo appendString:[NSString stringWithFormat:@"HeadingMode: %u\n", (unsigned int)wpMission.headingMode]];
[missionInfo appendString:[NSString stringWithFormat:@"FinishedAction: %u\n", (unsigned int)wpMission.finishedAction]];
[missionInfo appendString:[NSString stringWithFormat:@"FlightPathMode: %u\n", (unsigned int)wpMission.flightPathMode]];
[missionInfo appendString:[NSString stringWithFormat:@"MaxFlightSpeed: %f\n", wpMission.maxFlightSpeed]];
[missionInfo appendString:[NSString stringWithFormat:@"AutoFlightSpeed: %f\n", wpMission.autoFlightSpeed]];
[missionInfo appendString:[NSString stringWithFormat:@"There are %zd waypoint(s). ", wpMission.waypointCount]];
[self.statusLabel setText:missionInfo];
}
+(NSString *)descriptionForMissionState:(DJIWaypointMissionState)state {
switch (state) {
case DJIWaypointMissionStateUnknown:
return @"Unknown";
case DJIWaypointMissionStateExecuting:
return @"Executing";
case DJIWaypointMissionStateUploading:
return @"Uploading";
case DJIWaypointMissionStateRecovering:
return @"Recovering";
case DJIWaypointMissionStateDisconnected:
return @"Disconnected";
case DJIWaypointMissionStateNotSupported:
return @"NotSupported";
case DJIWaypointMissionStateReadyToUpload:
return @"ReadyToUpload";
case DJIWaypointMissionStateReadyToExecute:
return @"ReadyToExecute";
case DJIWaypointMissionStateExecutionPaused:
return @"ExecutionPaused";
}
return @"Unknown";
}
+(NSString *)descriptionForExecuteState:(DJIWaypointMissionExecuteState)state {
switch (state) {
case DJIWaypointMissionExecuteStateInitializing:
return @"Initializing";
case DJIWaypointMissionExecuteStateMoving:
return @"Moving";
case DJIWaypointMissionExecuteStatePaused:
return @"Paused";
case DJIWaypointMissionExecuteStateBeginAction:
return @"BeginAction";
case DJIWaypointMissionExecuteStateDoingAction:
return @"Doing Action";
case DJIWaypointMissionExecuteStateFinishedAction:
return @"Finished Action";
case DJIWaypointMissionExecuteStateCurveModeMoving:
return @"CurveModeMoving";
case DJIWaypointMissionExecuteStateCurveModeTurning:
return @"CurveModeTurning";
case DJIWaypointMissionExecuteStateReturnToFirstWaypoint:
return @"Return To first Point";
default:
break;
}
return @"Unknown";
}
@end
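Since the question asks for Swift, here is roughly what the prepare/upload/start flow above looks like when bridged to Swift. This is a sketch assuming DJI Mobile SDK 4.x and the usual Objective-C-to-Swift name mapping; check the method names against your SDK headers before relying on it.
// Sketch: load a mission into the waypoint mission operator, upload it to the
// aircraft, and start it once the upload completes.
func run(_ mission: DJIWaypointMission) {
    guard let wpOperator = DJISDKManager.missionControl()?.waypointMissionOperator() else { return }

    // Validate and load the mission into the operator.
    if let loadError = wpOperator.load(mission) {
        print("Load failed: \(loadError.localizedDescription)")
        return
    }

    // Upload to the aircraft; only start once the upload reports success.
    wpOperator.uploadMission { uploadError in
        if let uploadError = uploadError {
            print("Upload failed: \(uploadError.localizedDescription)")
            return
        }
        wpOperator.startMission { startError in
            if let startError = startError {
                print("Start failed: \(startError.localizedDescription)")
            } else {
                print("Waypoint mission started")
            }
        }
    }
}
In production code you would also listen to upload events (as the Objective-C sample does) and check that the operator's current state is ready-to-execute before calling startMission.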

Related

Google User Messaging Platform (UMP) SDK implementation

I am wondering how I can make the consent form show in my iOS app, following https://developers.google.com/admob/ump/ios/quick-start.
I am using Xcode 11.6 and exported the game from the Godot game engine.
I have changed the GADApplicationIdentifier in my Info.plist, and I don't get any errors in my code; it just doesn't show the form when I run the game. I live in Europe. Any help is appreciated.
This is my current code:
#include <UserMessagingPlatform/UserMessagingPlatform.h>
#import "ViewController1.h"
#include <UserMessagingPlatform/UserMessagingPlatform.h>
@interface ViewController1 ()
@end
@implementation ViewController1
- (void)start {
// Create a UMPRequestParameters object.
UMPRequestParameters *parameters = [[UMPRequestParameters alloc] init];
// Set tag for under age of consent. Here @NO means users are not under age.
parameters.tagForUnderAgeOfConsent = @NO;
// Request an update to the consent information.
[UMPConsentInformation.sharedInstance
requestConsentInfoUpdateWithParameters:parameters
completionHandler:^(NSError *_Nullable error) {
if (error) {
// Handle the error.
} else {
// The consent information state was updated.
// You are now ready to check if a form is
// available.
UMPFormStatus formStatus =
UMPConsentInformation.sharedInstance
.formStatus;
if (formStatus == UMPFormStatusAvailable) {
[self loadForm];
}
}
}];
}
- (void)viewDidLoad {
// Create a UMPRequestParameters object.
UMPRequestParameters *parameters = [[UMPRequestParameters alloc] init];
// Set tag for under age of consent. Here @NO means users are not under age.
parameters.tagForUnderAgeOfConsent = @NO;
// Request an update to the consent information.
[UMPConsentInformation.sharedInstance
requestConsentInfoUpdateWithParameters:parameters
completionHandler:^(NSError *_Nullable error) {
if (error) {
// Handle the error.
} else {
// The consent information state was updated.
// You are now ready to check if a form is
// available.
UMPFormStatus formStatus =
UMPConsentInformation.sharedInstance
.formStatus;
if (formStatus == UMPFormStatusAvailable) {
[self loadForm];
}
}
}];
[super viewDidLoad];
}
- (void)loadForm {
[UMPConsentForm loadWithCompletionHandler:^(UMPConsentForm *form,
NSError *loadError) {
if (loadError) {
// Handle the error.
} else {
// Present the form. You can also hold on to the reference to present
// later.
if (UMPConsentInformation.sharedInstance.consentStatus ==
UMPConsentStatusRequired) {
[form
presentFromViewController:self
completionHandler:^(NSError *_Nullable dismissError) {
if (UMPConsentInformation.sharedInstance.consentStatus ==
UMPConsentStatusObtained) {
// App can start requesting ads.
}
}];
} else {
// Keep the form available for changes to user consent.
}
}
}];
}
#end
EDIT
Add a few NSLogs to see what gets called.
Try the following - just a stone into the bush; maybe it hits something ...
#include <UserMessagingPlatform/UserMessagingPlatform.h>
#import "ViewController1.h"
#include <UserMessagingPlatform/UserMessagingPlatform.h>
@interface ViewController1 ()
@end
@implementation ViewController1
- (void)start {
// Create a UMPRequestParameters object.
UMPRequestParameters *parameters = [[UMPRequestParameters alloc] init];
// Set tag for under age of consent. Here @NO means users are not under age.
parameters.tagForUnderAgeOfConsent = @NO;
// Request an update to the consent information.
[UMPConsentInformation.sharedInstance
requestConsentInfoUpdateWithParameters:parameters
completionHandler:^(NSError *_Nullable error) {
if (error) {
// Handle the error.
} else {
// The consent information state was updated.
// You are now ready to check if a form is
// available.
UMPFormStatus formStatus =
UMPConsentInformation.sharedInstance
.formStatus;
if (formStatus == UMPFormStatusAvailable) {
[self loadForm];
}
}
}];
}
// Change this one
- (void)viewDidLoad {
[super viewDidLoad];
}
// Add this one
- (void)viewDidAppear:(BOOL)animated {
[super viewDidAppear:animated];
// View controller is now visible and on screen, so request permission
[self addMobStuff];
}
// Change / add this one
- (void)addMobStuff {
NSLog(@"addMobStuff");
// Create a UMPRequestParameters object.
UMPRequestParameters *parameters = [[UMPRequestParameters alloc] init];
// Set tag for under age of consent. Here @NO means users are not under age.
parameters.tagForUnderAgeOfConsent = @NO;
// Request an update to the consent information.
if ( ! UMPConsentInformation.sharedInstance )
{
NSLog(@"No shared instance");
}
[UMPConsentInformation.sharedInstance
requestConsentInfoUpdateWithParameters:parameters
completionHandler:^(NSError *_Nullable error) {
if (error) {
// Handle the error.
NSLog(@"Some error %@", error);
} else {
NSLog(@"Proceed to form ...");
// The consent information state was updated.
// You are now ready to check if a form is
// available.
UMPFormStatus formStatus =
UMPConsentInformation.sharedInstance
.formStatus;
if (formStatus == UMPFormStatusAvailable) {
NSLog(@"Loading form ...");
[self loadForm];
}
else {
NSLog(@"Form status is not available");
}
}
}];
}
- (void)loadForm {
[UMPConsentForm loadWithCompletionHandler:^(UMPConsentForm *form,
NSError *loadError) {
if (loadError) {
// Handle the error.
NSLog(@"Form error %@", loadError);
} else {
// Present the form. You can also hold on to the reference to present
// later.
if (UMPConsentInformation.sharedInstance.consentStatus ==
UMPConsentStatusRequired) {
NSLog(@"Presenting form");
[form
presentFromViewController:self
completionHandler:^(NSError *_Nullable dismissError) {
if (UMPConsentInformation.sharedInstance.consentStatus ==
UMPConsentStatusObtained) {
// App can start requesting ads.
}
}];
} else {
// Keep the form available for changes to user consent.
NSLog(@"Changes");
}
}
}];
}
#end
Note: if this works, it probably needs a bit more polish before it can be released into the world, but hopefully at least the consent form will be displayed ...
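For completeness, the same consent flow looks roughly like the Swift sketch below. The method names are the bridged versions of the Objective-C calls above (UserMessagingPlatform SDK); it does not address anything Godot-specific, it is just the UMP calls.
import UIKit
import UserMessagingPlatform

// Sketch: request consent info and, if a form is required and available, present it.
func requestConsent(from viewController: UIViewController) {
    let parameters = UMPRequestParameters()
    parameters.tagForUnderAgeOfConsent = false

    UMPConsentInformation.sharedInstance.requestConsentInfoUpdate(with: parameters) { error in
        if let error = error {
            print("Consent info update failed: \(error)")
            return
        }
        guard UMPConsentInformation.sharedInstance.formStatus == .available else {
            print("No consent form available")
            return
        }
        UMPConsentForm.load { form, loadError in
            if let loadError = loadError {
                print("Consent form load failed: \(loadError)")
                return
            }
            if UMPConsentInformation.sharedInstance.consentStatus == .required {
                form?.present(from: viewController) { _ in
                    // Consent flow finished; ads can be requested from here.
                }
            }
        }
    }
}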

iOS Speech Recognition crash on installTapOnBus - [AVAudioIONodeImpl.mm:911:SetOutputFormat: (format.sampleRate == hwFormat.sampleRate)]

Most of the time when I launch the app, the text-to-speech audio and speech recognition work perfectly. But sometimes I launch it and it crashes when first starting speech recognition. It seems to get on a roll and crash several launches in a row, so it is a little bit consistent once it gets into one of its 'moods' :)
The recognition starts after a TTS introduction and at the same time as TTS says 'listening' - therefore both are active at once. Possibly the audio takes a few milliseconds to change over and gets crashy, but I am not clear how this works or how to prevent it.
I see the following error:
[avae] AVAEInternal.h:70:_AVAE_Check: required condition is false:
[AVAudioIONodeImpl.mm:911:SetOutputFormat: (format.sampleRate == hwFormat.sampleRate)]
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio',
reason: 'required condition is false: format.sampleRate == hwFormat.sampleRate'
I have put some try-catches in, just to see if it prevents this error and it doesn't. I also added a tiny sleep which also made no difference. So I am not even clear which code is causing it. If I put a breakpoint before the removeTapOnBus code, it does not crash until this line executes. If I put the breakpoint on the installTapOnBus line it does not crash until that line. And if I put the breakpoint after the code, it crashes. So it does seem to be this code.
In any case, what am I doing wrong or how could I debug it?
- (void) recordAndRecognizeWithLang:(NSString *) lang
{
NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:lang];
self.sfSpeechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];
if (!self.sfSpeechRecognizer) {
[self sendErrorWithMessage:@"The language is not supported" andCode:7];
} else {
// Cancel the previous task if it's running.
if ( self.recognitionTask ) {
[self.recognitionTask cancel];
self.recognitionTask = nil;
}
//[self initAudioSession];
self.recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
self.recognitionRequest.shouldReportPartialResults = [[self.command argumentAtIndex:1] boolValue];
// https://developer.apple.com/documentation/speech/sfspeechrecognizerdelegate
// only callback is availabilityDidChange
self.sfSpeechRecognizer.delegate = self;
self.recognitionTask = [self.sfSpeechRecognizer recognitionTaskWithRequest:self.recognitionRequest resultHandler:^(SFSpeechRecognitionResult *result, NSError *error) {
NSLog(@"recognise");
if (error) {
NSLog(@"error %ld", error.code);
// code 1 or 203 or 216 = we called abort via self.recognitionTask cancel
// 1101 is thrown when in simulator
// 1700 is when not given permission
if (error.code==203){ //|| error.code==216
// nothing, carry on; this error looks spurious, or maybe not...
[self sendErrorWithMessage:@"sfSpeechRecognizer Error" andCode:error.code];
}else{
[self stopAndRelease];
// note: we can't send the error back to JS as I found it crashes (when recognising, then switching apps, then coming back)
[self sendErrorWithMessage:@"sfSpeechRecognizer Error" andCode:error.code];
return;
}
}
if (result) {
NSMutableArray * alternatives = [[NSMutableArray alloc] init];
int maxAlternatives = [[self.command argumentAtIndex:2] intValue];
for ( SFTranscription *transcription in result.transcriptions ) {
if (alternatives.count < maxAlternatives) {
float confMed = 0;
for ( SFTranscriptionSegment *transcriptionSegment in transcription.segments ) {
//NSLog(@"transcriptionSegment.confidence %f", transcriptionSegment.confidence);
if (transcriptionSegment.confidence){
confMed +=transcriptionSegment.confidence;
}
}
NSLog(@"transcriptionSegment.transcript %@", transcription.formattedString);
NSMutableDictionary * resultDict = [[NSMutableDictionary alloc] init];
[resultDict setValue:transcription.formattedString forKey:@"transcript"];
[resultDict setValue:[NSNumber numberWithBool:result.isFinal] forKey:@"final"];
float conf = 0;
if (confMed && transcription.segments && transcription.segments.count && transcription.segments.count>0){
conf = confMed/transcription.segments.count;
}
[resultDict setValue:[NSNumber numberWithFloat:conf] forKey:@"confidence"];
[alternatives addObject:resultDict];
}
}
[self sendResults:@[alternatives]];
if ( result.isFinal ) {
//NSLog(@"recog: isFinal");
[self stopAndRelease];
}
}
}];
//[self.audioEngine.inputNode disconnectNodeInput:0];
AVAudioFormat *recordingFormat = [self.audioEngine.inputNode outputFormatForBus:0];
//AVAudioFormat *recordingFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100 channels:1];
NSLog(@"samplerate=%f", recordingFormat.sampleRate);
NSLog(@"channelCount=%i", recordingFormat.channelCount);
// tried this but does not prevent crashing
//if (recordingFormat.sampleRate <= 0) {
// [self.audioEngine.inputNode reset];
// recordingFormat = [[self.audioEngine inputNode] outputFormatForBus:0];
//}
sleep(1); // to prevent random crashes
@try {
[self.audioEngine.inputNode removeTapOnBus:0];
} @catch (NSException *exception) {
NSLog(@"removeTapOnBus exception");
}
sleep(1); // to prevent random crashes
@try {
NSLog(@"install tap on bus");
[self.audioEngine.inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
//NSLog(@"tap");
[self.recognitionRequest appendAudioPCMBuffer:buffer];
}];
} @catch (NSException *exception) {
NSLog(@"installTapOnBus exception");
}
sleep(1); // to prevent random crashes
[self.audioEngine prepare];
NSError* error = nil;
BOOL isOK = [self.audioEngine startAndReturnError:&error];
if (!isOK){
NSLog(#"audioEngine startAndReturnError returned false");
}
if (error){
NSLog(#"audioEngine startAndReturnError error");
}
}
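Not a confirmed fix for this exact plugin, but the usual suggestion for this sampleRate assertion is to configure and activate the AVAudioSession for recording before reading the input node's format and installing the tap, so the format you pass cannot disagree with the hardware format that TTS playback left behind. A hedged Swift sketch of that setup:
import AVFoundation

/// Sketch: activate a record-capable audio session before installing the tap,
/// then let the input node report the actual hardware format.
func prepareEngineForRecognition(_ engine: AVAudioEngine) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .measurement, options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true, options: .notifyOthersOnDeactivation)

    let inputNode = engine.inputNode
    // Ask the node for its current hardware format instead of hard-coding one;
    // a mismatch here is what triggers the sampleRate assertion.
    let format = inputNode.outputFormat(forBus: 0)
    inputNode.removeTap(onBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // Append `buffer` to the SFSpeechAudioBufferRecognitionRequest here.
    }
    engine.prepare()
    try engine.start()
}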

iOS Swift - WebRTC: change from front camera to back camera

WebRTC video uses the front camera by default, which works fine. However, I need to switch it to the back camera, and I have not been able to find any code to do that.
Which part do I need to edit?
Is it the localView, the localVideoTrack, or the capturer?
Swift 3.0
A peer connection can have only one 'RTCVideoTrack' for sending the video stream.
To switch between the front and back camera, first remove the current video track from the peer connection.
Then create a new 'RTCVideoTrack' from the camera you need, and set it on the peer connection.
I used these methods.
func swapCameraToFront() {
let localStream: RTCMediaStream? = peerConnection?.localStreams.first as? RTCMediaStream
localStream?.removeVideoTrack(localStream?.videoTracks.first as! RTCVideoTrack)
let localVideoTrack: RTCVideoTrack? = createLocalVideoTrack()
if localVideoTrack != nil {
localStream?.addVideoTrack(localVideoTrack)
delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack!)
}
peerConnection?.remove(localStream)
peerConnection?.add(localStream)
}
func swapCameraToBack() {
let localStream: RTCMediaStream? = peerConnection?.localStreams.first as? RTCMediaStream
localStream?.removeVideoTrack(localStream?.videoTracks.first as! RTCVideoTrack)
let localVideoTrack: RTCVideoTrack? = createLocalVideoTrackBackCamera()
if localVideoTrack != nil {
localStream?.addVideoTrack(localVideoTrack)
delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack!)
}
peerConnection?.remove(localStream)
peerConnection?.add(localStream)
}
As of now I only have the answer in Objective-C, in regard to Ankit's comment below. I will convert it into Swift after some time.
You can check the below code
- (RTCVideoTrack *)createLocalVideoTrack {
RTCVideoTrack *localVideoTrack = nil;
NSString *cameraID = nil;
for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
if (captureDevice.position == AVCaptureDevicePositionFront) {
cameraID = [captureDevice localizedName]; break;
}
}
RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
return localVideoTrack;
}
- (RTCVideoTrack *)createLocalVideoTrackBackCamera {
RTCVideoTrack *localVideoTrack = nil;
//AVCaptureDevicePositionFront
NSString *cameraID = nil;
for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
if (captureDevice.position == AVCaptureDevicePositionBack) {
cameraID = [captureDevice localizedName];
break;
}
}
RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
return localVideoTrack;
}
If you decide to use the official Google build, here is the explanation:
First, you must configure your camera before calling start; the best place to do that is in ARDVideoCallViewDelegate, in the method didCreateLocalCapturer.
- (void)startCapture:(void (^)(BOOL succeeded))completionHandler {
AVCaptureDevicePosition position = _usingFrontCamera ? AVCaptureDevicePositionFront : AVCaptureDevicePositionBack;
__weak AVCaptureDevice *device = [self findDeviceForPosition:position];
if ([device lockForConfiguration:nil]) {
if ([device isFocusPointOfInterestSupported]) {
[device setFocusModeLockedWithLensPosition:0.9 completionHandler: nil];
}
}
AVCaptureDeviceFormat *format = [self selectFormatForDevice:device];
if (format == nil) {
RTCLogError(@"No valid formats for device %@", device);
NSAssert(NO, @"");
return;
}
NSInteger fps = [self selectFpsForFormat:format];
[_capturer startCaptureWithDevice: device
format: format
fps:fps completionHandler:^(NSError * error) {
NSLog(@"%@", error);
if (error == nil) {
completionHandler(true);
}
}];
}
Don't forget that enabling the capture device is asynchronous; it is sometimes better to use the completion handler to be sure everything is done as expected.
I am not sure which Chrome/WebRTC version you are using, but with v54 and above there is a BOOL property called "useBackCamera" in the RTCAVFoundationVideoSource class. You can make use of this property to switch between the front and back camera.
Swift 4.0 & 'GoogleWebRTC' : '1.1.20913'
RTCAVFoundationVideoSource class has a property named useBackCamera that can be used for switching the camera used.
@interface RTCAVFoundationVideoSource : RTCVideoSource
- (instancetype)init NS_UNAVAILABLE;
/**
* Calling this function will cause frames to be scaled down to the
* requested resolution. Also, frames will be cropped to match the
* requested aspect ratio, and frames will be dropped to match the
* requested fps. The requested aspect ratio is orientation agnostic and
* will be adjusted to maintain the input orientation, so it doesn't
* matter if e.g. 1280x720 or 720x1280 is requested.
*/
- (void)adaptOutputFormatToWidth:(int)width height:(int)height fps:(int)fps;
/** Returns whether rear-facing camera is available for use. */
@property(nonatomic, readonly) BOOL canUseBackCamera;
/** Switches the camera being used (either front or back). */
@property(nonatomic, assign) BOOL useBackCamera;
/** Returns the active capture session. */
@property(nonatomic, readonly) AVCaptureSession *captureSession;
Below is the implementation for switching camera.
var useBackCamera: Bool = false
func switchCamera() {
useBackCamera = !useBackCamera
self.switchCamera(useBackCamera: useBackCamera)
}
private func switchCamera(useBackCamera: Bool) -> Void {
let localStream = peerConnection?.localStreams.first
if let videoTrack = localStream?.videoTracks.first {
localStream?.removeVideoTrack(videoTrack)
}
let localVideoTrack = createLocalVideoTrack(useBackCamera: useBackCamera)
localStream?.addVideoTrack(localVideoTrack)
self.delegate?.webRTCClientDidAddLocal(videoTrack: localVideoTrack)
if let ls = localStream {
peerConnection?.remove(ls)
peerConnection?.add(ls)
}
}
func createLocalVideoTrack(useBackCamera: Bool) -> RTCVideoTrack {
let videoSource = self.factory.avFoundationVideoSource(with: self.constraints)
videoSource.useBackCamera = useBackCamera
let videoTrack = self.factory.videoTrack(with: videoSource, trackId: "video")
return videoTrack
}
In the current version of WebRTC, RTCAVFoundationVideoSource has been deprecated and replaced with a
generic RTCVideoSource combined with an RTCVideoCapturer implementation.
In order to switch the camera I'm doing this:
- (void)switchCameraToPosition:(AVCaptureDevicePosition)position completionHandler:(void (^)(void))completionHandler {
if (self.cameraPosition != position) {
RTCMediaStream *localStream = self.peerConnection.localStreams.firstObject;
[localStream removeVideoTrack:self.localVideoTrack];
//[self.peerConnection removeStream:localStream];
self.localVideoTrack = [self createVideoTrack];
[self startCaptureLocalVideoWithPosition:position completionHandler:^{
[localStream addVideoTrack:self.localVideoTrack];
//[self.peerConnection addStream:localStream];
if (completionHandler) {
completionHandler();
}
}];
self.cameraPosition = position;
}
}
Take a look at the commented lines: if you start adding/removing the stream from the peer connection, it will cause a delay in the video connection.
I'm using GoogleWebRTC-1.1.25102
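For reference, with the non-deprecated RTCCameraVideoCapturer you can usually leave the track and the peer connection alone and just restart the capturer on the other device. A sketch against the GoogleWebRTC pod (verify the method names against the headers of the build you are using):
import AVFoundation
import WebRTC

/// Sketch: switch the camera on an existing RTCCameraVideoCapturer without
/// touching the peer connection or the local video track.
func switchCamera(on capturer: RTCCameraVideoCapturer, toBack useBack: Bool) {
    let position: AVCaptureDevice.Position = useBack ? .back : .front
    guard
        let device = RTCCameraVideoCapturer.captureDevices().first(where: { $0.position == position }),
        // Picks an arbitrary supported format; choose one matching your resolution needs.
        let format = RTCCameraVideoCapturer.supportedFormats(for: device).last
    else { return }

    // Use the highest frame rate the chosen format supports (fall back to 30 fps).
    let fps = format.videoSupportedFrameRateRanges.map { Int($0.maxFrameRate) }.max() ?? 30

    // Restart capture on the new device; frames keep flowing into the same RTCVideoSource.
    capturer.stopCapture {
        capturer.startCapture(with: device, format: format, fps: fps)
    }
}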

Sit-Up counter using CMDeviceMotion

I'm trying to replicate a fitness app similar to Runtastic's Fitness Apps.
Sit-Ups
This is our first app that uses the phone's built-in accelerometer to detect movement. You need to hold the phone against your chest, then sit up quickly enough and high enough for the accelerometer to register the movement and the app to count 1 sit-up. Be sure to do a proper sit-up by going high enough!
I did a prototype app similar to this question here and tried to implement a way to count sit-ups.
- (void)viewDidLoad {
[super viewDidLoad];
__block int count = 0; // __block so the motion handler block can modify it
motionManager = [[CMMotionManager alloc]init];
if (motionManager.deviceMotionAvailable)
{
motionManager.deviceMotionUpdateInterval = 0.1;
[motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue] withHandler:^(CMDeviceMotion *motion, NSError *error) {
// Get the attitude of the device
CMAttitude *attitude = motion.attitude;
// Get the pitch (in radians) and convert to degrees.
double degree = attitude.pitch * 180.0/M_PI;
NSLog(@"%f", degree);
dispatch_async(dispatch_get_main_queue(), ^{
// Update some UI
if (degree >=75.0)
{
//it keeps counting if the condition is true!
count++;
self.lblCount.text = [NSString stringWithFormat:@"%i", count];
}
});
}];
NSLog(@"Device motion started");
}
else
{
NSLog(@"Device motion unavailable");
}
}
The if condition works: if I place the device on my chest and do a proper sit-up, it counts. The problem is that it just keeps counting while the condition stays true, and I want it to count again only after the device has gone back to its original position.
Can anyone come up with a logical implementation for this?
A simple boolean flag did the trick:
__block BOOL situp = NO;
if (!situp)
{
if (degree >=75.0)
{
count++;
self.lblCount.text = [NSString stringWithFormat:@"%i", count];
situp = YES;
}
}
else
{
if (degree <=10.0)
{
situp = NO;
}
}
Not the best logical implementation here, but it gets the job done...
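If you are doing this in Swift, the same hysteresis idea (count once on the way up, re-arm only after coming back down) looks roughly like the sketch below; the 75°/10° thresholds are the same arbitrary values used above.
import CoreMotion

final class SitUpCounter {
    private let motionManager = CMMotionManager()
    private var isUp = false          // hysteresis flag: true once the "up" phase of a rep is seen
    private(set) var count = 0

    func start(onRep: @escaping (Int) -> Void) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 0.1
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self = self, let motion = motion else { return }
            let degrees = motion.attitude.pitch * 180.0 / .pi
            if !self.isUp, degrees >= 75.0 {
                // Count the rep once on the way up...
                self.isUp = true
                self.count += 1
                onRep(self.count)
            } else if self.isUp, degrees <= 10.0 {
                // ...and re-arm only after returning near the start position.
                self.isUp = false
            }
        }
    }

    func stop() { motionManager.stopDeviceMotionUpdates() }
}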

Copying Swift arrays from background to foreground

If we go from a background thread to the foreground in Swift, what is the proper way to do [nsObject copy]?
For example in Objective-C, we would loop through a long array of ALAssets (say like 10,000+) in the background by doing:
[alGroup enumerateAssetsUsingBlock:^(ALAsset *alPhoto, NSUInteger index, BOOL *stop)
{
// Here to make changes for speed up image loading from device library...
// =====================================================
// >>>>>>>>>>>>>>>>>>> IN BACKGROUND <<<<<<<<<<<<<<<<<<<
// =====================================================
if(alPhoto == nil)
{
c(@"number of assets to display: %d", (int)bgAssetMedia.count);
// c(@"All device library photos uploaded into memory...%@", bgAssetMedia);
dispatch_async(dispatch_get_main_queue(), ^(void)
{
// =====================================================
// >>>>>>>>>>>>>>>>>>> IN FOREGROUND <<<<<<<<<<<<<<<<<<<
// =====================================================
[ui hideSpinner];
if (_bReverse)
// Here we copying all the photos from device library into array (_assetPhotos)...
_assetPhotos = [[NSMutableArray alloc] initWithArray:[[[bgAssetMedia copy] reverseObjectEnumerator] allObjects]];
else
_assetPhotos = [[NSMutableArray alloc] initWithArray:[bgAssetMedia copy]];
// NSLog(@"%lu",(unsigned long)_assetPhotos.count);
if (_assetPhotos.count > 0)
{
result(_assetPhotos);
}
});
} else {
// if we have a Custom album, lets remove all shared videos from the Camera Roll
if (![self isPhotoInCustomAlbum:alPhoto])
{
// for some reason, shared glancy videos still show with 00:00 minutes and seconds, so remove them now
BOOL isVideo = [[alPhoto valueForProperty:ALAssetPropertyType] isEqual:ALAssetTypeVideo];
int duration = 0;
int minutes = 0;
int seconds = 0;
// NSString *bgVideoLabel = nil;
if (isVideo)
{
NSString *strduration = [alPhoto valueForProperty:ALAssetPropertyDuration];
duration = [strduration intValue];
minutes = duration / 60;
seconds = duration % 60;
// bgVideoLabel = [NSString stringWithFormat:@"%d:%02d", minutes, seconds];
if (minutes > 0 || seconds > 0)
{
[bgAssetMedia addObject:alPhoto];
}
} else {
[bgAssetMedia addObject:alPhoto];
}
}
}
// NSLog(@"%lu",(unsigned long)bgAssetMedia.count);
}];
Then we would switch to the foreground to update the UIViewController, which is what these lines in the above snippet do:
_assetPhotos = [[NSMutableArray alloc] initWithArray:[bgAssetMedia copy]];
The "copy" function was the black magic that allowed us to quickly marshal the memory from background to foreground without having to loop through array again.
Is there a similar method in Swift? Perhaps something like this:
_assetPhotos = NSMutableArray(array: bgAssetMedia.copy())
Is Swift thread safe now for passing memory pointers from background to foreground? What's the new protocol? Thank you-
I found the answer after running large queries on Realm and Core Data database contexts: it is easiest to just make a basic copy and downcast it to match the class.
let mediaIdFG = mediaId.copy() as! String
Full example in context below:
static func createOrUpdate(dictionary:NSDictionary) -> Promise<Media> {
// Query and update from any thread
return Promise { fulfill, reject in
executeInBackground {
// c("BG media \(dictionary)")
let realm:RLMRealm = RLMRealm.defaultRealm()
realm.beginWriteTransaction()
let media = Media.createOrUpdateInRealm(realm, withJSONDictionary:dictionary as [NSObject : AnyObject])
// media.type = type
c("BG media \(media)")
let mediaId = media.localIdentifier
do {
try realm.commitWriteTransaction()
executeInForeground({
let mediaIdFG = mediaId.copy() as! String
let newMedia = Media.findOneByLocalIdentifier(mediaIdFG)
c("FG \(mediaIdFG) newMedia \(newMedia)")
fulfill(newMedia)
})
} catch {
reject( Constants.createError("Realm Something went wrong!") )
}
}
} // return promise
} // func createOrUpdate
Posting my own answer to let you know my findings. I also found this helpful article about Swift's copy() aka objc's copyWithZone: https://www.hackingwithswift.com/example-code/system/how-to-copy-objects-in-swift-using-copy
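One extra note specific to arrays: Swift's Array is a value type with copy-on-write, so in pure Swift you typically don't need copy() at all when hopping queues; capturing the finished array in the block that dispatches to the main queue already hands the UI its own copy. A sketch with a hypothetical loader (loadAssetsInBackground and the string "assets" are just stand-ins, not part of the original code):
import Foundation

// Sketch with a hypothetical `loadAssetsInBackground` helper: Swift arrays are
// value types, so handing the finished array to the main queue already gives
// the UI code its own (copy-on-write) copy.
func loadAssetsInBackground(result: @escaping ([String]) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        var bgAssets: [String] = []
        for index in 0..<10_000 {
            bgAssets.append("asset-\(index)")   // stand-in for the real asset work
        }
        let snapshot = bgAssets                 // value-type snapshot; no copy() needed
        DispatchQueue.main.async {
            result(snapshot)                    // safe to use on the main thread
        }
    }
}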
