Determine the device orientation and its resulting angle along one dimension - iOS

I have the following setup:
An iPhone lies on a table with its display facing the ceiling (alpha = 0 degrees). When the iPhone is tilted upwards, as shown in the image above, the alpha angle increases.
How do I compute the value of the alpha angle without regard to any other axes that could change? I am only interested in this one axis.
How do I get the correct alpha angle of the iPhone as it is lifted from the table, and how do I get notified when the value of alpha changes?

You can use the CMMotionManager class to monitor device motion changes.
Objective-C
// Ensure you keep a strong reference to the motion manager, otherwise you won't get updates
self.motionManager = [[CMMotionManager alloc] init];
if (self.motionManager.deviceMotionAvailable) {
    self.motionManager.deviceMotionUpdateInterval = 0.1;
    // For use in the motionManager's handler, to prevent a strong reference cycle
    __weak typeof(self) weakSelf = self;
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    [self.motionManager startDeviceMotionUpdatesToQueue:queue
                                            withHandler:^(CMDeviceMotion *motion, NSError *error) {
        // Get the attitude of the device
        CMAttitude *attitude = motion.attitude;
        // Get the pitch (in radians) and convert to degrees.
        NSLog(@"%f", attitude.pitch * 180.0 / M_PI);
        dispatch_async(dispatch_get_main_queue(), ^{
            // Update some UI
        });
    }];
    NSLog(@"Device motion started");
} else {
    NSLog(@"Device motion unavailable");
}
Swift
// Ensure you keep a strong reference to the motion manager, otherwise you won't get updates
self.motionManager = CMMotionManager()
if motionManager?.isDeviceMotionAvailable == true {
    motionManager?.deviceMotionUpdateInterval = 0.1
    let queue = OperationQueue()
    motionManager?.startDeviceMotionUpdates(to: queue) { [weak self] motion, error in
        // Get the attitude of the device
        if let attitude = motion?.attitude {
            // Get the pitch (in radians) and convert to degrees.
            print(attitude.pitch * 180.0 / Double.pi)
            DispatchQueue.main.async {
                // Update some UI
            }
        }
    }
    print("Device motion started")
} else {
    print("Device motion unavailable")
}
NSHipster is (as always) a great source of information, and the article on CMDeviceMotion is no exception.

Swift 4:
let motionManager = CMMotionManager()
if motionManager.isDeviceMotionAvailable {
    motionManager.deviceMotionUpdateInterval = 0.1
    motionManager.startDeviceMotionUpdates(to: OperationQueue()) { [weak self] (motion, error) -> Void in
        if let attitude = motion?.attitude {
            print(attitude.pitch * 180 / Double.pi)
            DispatchQueue.main.async {
                // Update UI
            }
        }
    }
    print("Device motion started")
} else {
    print("Device motion unavailable")
}

Related

DJI SDK waypoint missions - Swift, iOS

I want to program a system where coordinates can be passed to the drone as waypoints, and the drone will carry out the actions. The DJI API is documented in Objective-C, and while the concepts are the same, I'm struggling to understand how a mission is programmed.
If someone could help me with the basic structure of a waypoint mission and with passing it to the drone, it would be very helpful. Maybe I'm not understanding things well, but the DJI API doesn't seem to be very descriptive of how things work.
I'm not asking to be spoon-fed, but I'd appreciate an explanation from someone with insight.
This is how I wrote the waypoint function in Swift; the code comments below explain the mission specification.
/// Build a waypoint mission that moves the drone between points, each defined by altitude, longitude, and latitude
func waypointMission() -> DJIWaypointMission? {
    /// Create a new mutable waypoint mission
    let mission = DJIMutableWaypointMission()
    mission.maxFlightSpeed = 15
    mission.autoFlightSpeed = 8
    mission.finishedAction = .noAction
    mission.headingMode = .usingInitialDirection
    mission.flightPathMode = .normal
    mission.rotateGimbalPitch = false /// Change this to true if you want the camera gimbal pitch to move between waypoints
    mission.exitMissionOnRCSignalLost = true
    mission.gotoFirstWaypointMode = .pointToPoint
    mission.repeatTimes = 1
    /// Fetch the drone's current location (latitude and longitude)
    guard let droneLocationKey = DJIFlightControllerKey(param: DJIFlightControllerParamAircraftLocation) else {
        return nil
    }
    guard let droneLocationValue = DJISDKManager.keyManager()?.getValueFor(droneLocationKey) else {
        return nil
    }
    let droneLocation = droneLocationValue.value as! CLLocation
    let droneCoordinates = droneLocation.coordinate
    /// Check whether the returned coordinate value is valid
    if !CLLocationCoordinate2DIsValid(droneCoordinates) {
        return nil
    }
    mission.pointOfInterest = droneCoordinates
    let loc1 = CLLocationCoordinate2DMake(droneCoordinates.latitude, droneCoordinates.longitude)
    let waypoint1 = DJIWaypoint(coordinate: loc1)
    waypoint1.altitude = 2.0 /// The altitude the drone flies to at the first point; should be of type Float
    waypoint1.heading = 0 /// Between [-180, 180] degrees; the heading the drone turns to when reaching the waypoint. 0 means don't change the drone's heading
    waypoint1.actionRepeatTimes = 1 /// Repeat this action just once
    waypoint1.actionTimeoutInSeconds = 60
    // waypoint1.cornerRadiusInMeters = 5
    waypoint1.turnMode = .clockwise /// When the drone changes its heading, it turns clockwise
    waypoint1.gimbalPitch = 0 /// Between [-90, 0] degrees; to use this value, set rotateGimbalPitch to true. The gimbal moves to this pitch when the drone reaches the waypoint
    waypoint1.speed = 0.5 /// Note that this value does not make the drone move at 0.5 m/s, because of a firmware quirk that can't be fixed; however, it must be set in order to change the speed of the next waypoint
    let loc2 = CLLocationCoordinate2DMake(droneCoordinates.latitude, droneCoordinates.longitude)
    let waypoint2 = DJIWaypoint(coordinate: loc2)
    waypoint2.altitude = 15.0 /// Should be of type Float
    waypoint2.heading = 0
    waypoint2.actionRepeatTimes = 1
    waypoint2.actionTimeoutInSeconds = 60
    // waypoint2.cornerRadiusInMeters = 5
    waypoint2.turnMode = .clockwise
    waypoint2.gimbalPitch = 0
    /// Change the velocity of the drone while moving to this waypoint
    waypoint2.speed = 0.5
    mission.add(waypoint1)
    mission.add(waypoint2)
    return DJIWaypointMission(mission: mission)
}
Now when you call the previous function, you need to pass its result to the mission timeline. Here is an example flow of a full timeline mission:
/// Set up the drone mission and start its timeline
func goUpVerticallyMission() {
    /// Check if the drone is connected
    let product = DJISDKManager.product()
    if (product?.model) != nil {
        /// This array holds all the timeline elements, to be executed later in order
        var elements = [DJIMissionControlTimelineElement]()
        /// Reset the gimbal position
        let attitude = DJIGimbalAttitude(pitch: 0.0, roll: 0.0, yaw: 0.0)
        let pitchAction: DJIGimbalAttitudeAction = DJIGimbalAttitudeAction(attitude: attitude)!
        elements.append(pitchAction) /// task number 0
        let takeOff = DJITakeOffAction()
        elements.append(takeOff) /// task number 1
        /// Set up and start a new waypoint mission
        var mission: DJIWaypointMission?
        guard let result = self.waypointMission() else { return }
        mission = result
        elements.append(mission!) /// task number 2
        /// After the waypoint mission finishes, stop recording video
        let stopVideoAction = DJIRecordVideoAction(stopRecordVideo: ())
        elements.append(stopVideoAction!) /// task number 3
        /// This is the landing action after finishing the task
        // let landAction = DJILandAction()
        // elements.append(landAction)
        /// This is the go-home-and-land action, in which the drone returns to the point where it first started the mission
        let goHomeLandingAction = DJIGoHomeAction()
        elements.append(goHomeLandingAction) /// task number 4
        /// Check for errors while scheduling the timeline elements
        let error = DJISDKManager.missionControl()?.scheduleElements(elements)
        if error != nil {
            print("Error detected with the mission")
        } else {
            /// If there is no error, start the mission
            DispatchQueue.main.asyncAfter(deadline: .now()) {
                DJISDKManager.missionControl()?.startTimeline()
            }
        }
    }
}
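If you also want to observe progress while the timeline runs, DJI's Mission Control exposes a listener API. A hedged sketch (check the exact block signature against the DJI Mobile SDK docs for your version):
// Hedged sketch: register for timeline events before calling startTimeline().
DJISDKManager.missionControl()?.addListener(self, toTimelineProgressWith: { event, element, error, info in
    if let error = error {
        print("Timeline error: \(error.localizedDescription)")
    } else {
        print("Timeline event: \(event.rawValue)")
    }
})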
Please let me know if anything is unclear or if you need help with anything else!
Look at the DJI example app on GitHub:
https://github.com/dji-sdk/Mobile-SDK-iOS/tree/master/Sample%20Code/ObjcSampleCode/DJISdkDemo/Demo/MissionManager
//
// WaypointMissionViewController.m
// DJISdkDemo
//
// Copyright © 2015 DJI. All rights reserved.
//
/**
* This file demonstrates the process to start a waypoint mission. In this demo,
* the aircraft will go to four waypoints, shoot photos and record videos.
* The flight speed can be controlled by calling the class method
 * setAutoFlightSpeed:withCompletion:. In this demo, the aircraft will
 * change speed right after it reaches the second point (the point with index 1).
*
* CAUTION: it is highly recommended to run this sample using the simulator.
*/
#import <DJISDK/DJISDK.h>
#import "DemoUtility.h"
#import "WaypointMissionViewController.h"
#define ONE_METER_OFFSET (0.00000901315) // roughly one meter expressed in degrees of latitude
@interface WaypointMissionViewController ()

@property (nonatomic) DJIWaypointMissionOperator *wpOperator;
@property (nonatomic) DJIWaypointMission *downloadMission;

@end

@implementation WaypointMissionViewController
-(void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    self.wpOperator = [[DJISDKManager missionControl] waypointMissionOperator];
}

-(void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [self.wpOperator removeListenerOfExecutionEvents:self];
}
/**
 * Because the waypoint mission was refactored and uses a different interface design
 * from the other missions, we need to override the UI actions.
*/
-(void)onPrepareButtonClicked:(id)sender {
    DJIWaypointMission *wp = (DJIWaypointMission *)[self initializeMission];
    NSError *error = [self.wpOperator loadMission:wp];
    if (error) {
        ShowResult(@"Prepare Mission Failed:%@", error);
        return;
    }
    WeakRef(target);
    [self.wpOperator addListenerToUploadEvent:self withQueue:nil andBlock:^(DJIWaypointMissionUploadEvent * _Nonnull event) {
        WeakReturn(target);
        [target onUploadEvent:event];
    }];
    [self.wpOperator uploadMissionWithCompletion:^(NSError * _Nullable error) {
        WeakReturn(target);
        if (error) {
            ShowResult(@"ERROR: uploadMission:withCompletion:. %@", error.description);
        }
        else {
            ShowResult(@"SUCCESS: uploadMission:withCompletion:.");
        }
    }];
}
- (IBAction)onStartButtonClicked:(id)sender {
    WeakRef(target);
    [self.wpOperator addListenerToExecutionEvent:self withQueue:nil andBlock:^(DJIWaypointMissionExecutionEvent * _Nonnull event) {
        [target showWaypointMissionProgress:event];
    }];
    [self.wpOperator startMissionWithCompletion:^(NSError * _Nullable error) {
        if (error) {
            ShowResult(@"ERROR: startMissionWithCompletion:. %@", error.description);
        }
        else {
            ShowResult(@"SUCCESS: startMissionWithCompletion:. ");
        }
        [self missionDidStart:error];
    }];
}
-(void)onStopButtonClicked:(id)sender {
    WeakRef(target);
    [self.wpOperator stopMissionWithCompletion:^(NSError * _Nullable error) {
        WeakReturn(target);
        if (error) {
            ShowResult(@"ERROR: stopMissionExecutionWithCompletion:. %@", error.description);
        }
        else {
            ShowResult(@"SUCCESS: stopMissionExecutionWithCompletion:. ");
        }
        [target missionDidStop:error];
    }];
}
-(void)onDownloadButtonClicked:(id)sender {
    self.downloadMission = nil;
    WeakRef(target);
    [self.wpOperator addListenerToDownloadEvent:self
                                      withQueue:nil
                                       andBlock:^(DJIWaypointMissionDownloadEvent * _Nonnull event)
    {
        if (event.progress.downloadedWaypointIndex == event.progress.totalWaypointCount) {
            ShowResult(@"SUCCESS: the waypoint mission is downloaded. ");
            target.downloadMission = target.wpOperator.loadedMission;
            [target.wpOperator removeListenerOfDownloadEvents:target];
            [target.progressBar setHidden:YES];
            [target mission:target.downloadMission didDownload:event.error];
        }
        else if (event.error) {
            ShowResult(@"Download Mission Failed:%@", event.error);
            [target.progressBar setHidden:YES];
            [target mission:target.downloadMission didDownload:event.error];
            [target.wpOperator removeListenerOfDownloadEvents:target];
        } else {
            [target.progressBar setHidden:NO];
            float progress = ((float)event.progress.downloadedWaypointIndex + 1) / (float)event.progress.totalWaypointCount;
            NSLog(@"Download Progress:%d%%", (int)(progress*100));
            [target.progressBar setProgress:progress];
        }
    }];
    [self.wpOperator downloadMissionWithCompletion:^(NSError * _Nullable error) {
        WeakReturn(target);
        if (error) {
            ShowResult(@"ERROR: downloadMissionWithCompletion:. %@", error.description);
        }
        else {
            ShowResult(@"SUCCESS: downloadMissionWithCompletion:.");
        }
    }];
}
-(void)onPauseButtonClicked:(id)sender {
    [self missionWillPause];
    [self.wpOperator pauseMissionWithCompletion:^(NSError * _Nullable error) {
        if (error) {
            ShowResult(@"ERROR: pauseMissionWithCompletion:. %@", error.description);
        }
        else {
            ShowResult(@"SUCCESS: pauseMissionWithCompletion:. ");
        }
    }];
}

-(void)onResumeButtonClicked:(id)sender {
    WeakRef(target);
    [self.wpOperator resumeMissionWithCompletion:^(NSError * _Nullable error) {
        WeakReturn(target);
        if (error) {
            ShowResult(@"ERROR: resumeMissionWithCompletion:. %@", error.description);
        }
        else {
            ShowResult(@"SUCCESS: resumeMissionWithCompletion:. ");
        }
        [target missionDidResume:error];
    }];
}
/**
* Prepare the waypoint mission. The basic workflow is:
* 1. Create an instance of DJIWaypointMission.
* 2. Create coordinates.
* 3. Use the coordinate to create an instance of DJIWaypoint.
* 4. Add actions for each waypoint.
* 5. Add the waypoints into the mission.
*/
-(DJIMission*)initializeMission {
    // Step 1: create the mission
    DJIMutableWaypointMission* mission = [[DJIMutableWaypointMission alloc] init];
    mission.maxFlightSpeed = 15.0;
    mission.autoFlightSpeed = 4.0;
    // Step 2: prepare the coordinates
    CLLocationCoordinate2D northPoint;
    CLLocationCoordinate2D eastPoint;
    CLLocationCoordinate2D southPoint;
    CLLocationCoordinate2D westPoint;
    northPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude + 10 * ONE_METER_OFFSET, self.homeLocation.longitude);
    eastPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude, self.homeLocation.longitude + 10 * ONE_METER_OFFSET);
    southPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude - 10 * ONE_METER_OFFSET, self.homeLocation.longitude);
    westPoint = CLLocationCoordinate2DMake(self.homeLocation.latitude, self.homeLocation.longitude - 10 * ONE_METER_OFFSET);
    // Step 3: create the waypoints
    DJIWaypoint* northWP = [[DJIWaypoint alloc] initWithCoordinate:northPoint];
    northWP.altitude = 10.0;
    DJIWaypoint* eastWP = [[DJIWaypoint alloc] initWithCoordinate:eastPoint];
    eastWP.altitude = 20.0;
    DJIWaypoint* southWP = [[DJIWaypoint alloc] initWithCoordinate:southPoint];
    southWP.altitude = 30.0;
    DJIWaypoint* westWP = [[DJIWaypoint alloc] initWithCoordinate:westPoint];
    westWP.altitude = 40.0;
    // Step 4: add actions
    [northWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeRotateGimbalPitch param:-60]];
    [northWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeShootPhoto param:0]];
    [eastWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeShootPhoto param:0]];
    [southWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeRotateAircraft param:60]];
    [southWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeStartRecord param:0]];
    [westWP addAction:[[DJIWaypointAction alloc] initWithActionType:DJIWaypointActionTypeStopRecord param:0]];
    // Step 5: add the waypoints to the mission
    [mission addWaypoint:northWP];
    [mission addWaypoint:eastWP];
    [mission addWaypoint:southWP];
    [mission addWaypoint:westWP];
    return mission;
}
- (void)onUploadEvent:(DJIWaypointMissionUploadEvent *)event
{
    if (event.currentState == DJIWaypointMissionStateReadyToExecute) {
        ShowResult(@"SUCCESS: the whole waypoint mission is uploaded.");
        [self.progressBar setHidden:YES];
        [self.wpOperator removeListenerOfUploadEvents:self];
    }
    else if (event.error) {
        ShowResult(@"ERROR: waypoint mission uploading failed. %@", event.error.description);
        [self.progressBar setHidden:YES];
        [self.wpOperator removeListenerOfUploadEvents:self];
    }
    else if (event.currentState == DJIWaypointMissionStateReadyToUpload ||
             event.currentState == DJIWaypointMissionStateNotSupported ||
             event.currentState == DJIWaypointMissionStateDisconnected) {
        ShowResult(@"ERROR: waypoint mission uploading failed. %@", event.error.description);
        [self.progressBar setHidden:YES];
        [self.wpOperator removeListenerOfUploadEvents:self];
    } else if (event.currentState == DJIWaypointMissionStateUploading) {
        [self.progressBar setHidden:NO];
        DJIWaypointUploadProgress *progress = event.progress;
        // Cast to float to avoid integer division always yielding 0
        float progressInPercent = (float)progress.uploadedWaypointIndex / (float)progress.totalWaypointCount;
        [self.progressBar setProgress:progressInPercent];
    }
}
-(void)showWaypointMissionProgress:(DJIWaypointMissionExecutionEvent *)event {
    NSMutableString* statusStr = [NSMutableString new];
    [statusStr appendFormat:@"previousState:%@\n", [[self class] descriptionForMissionState:event.previousState]];
    [statusStr appendFormat:@"currentState:%@\n", [[self class] descriptionForMissionState:event.currentState]];
    [statusStr appendFormat:@"Target Waypoint Index: %zd\n", (long)event.progress.targetWaypointIndex];
    [statusStr appendString:[NSString stringWithFormat:@"Is Waypoint Reached: %@\n",
                             event.progress.isWaypointReached ? @"YES" : @"NO"]];
    [statusStr appendString:[NSString stringWithFormat:@"Execute State: %@\n", [[self class] descriptionForExecuteState:event.progress.execState]]];
    if (event.error) {
        [statusStr appendString:[NSString stringWithFormat:@"Execute Error: %@", event.error.description]];
        [self.wpOperator removeListenerOfExecutionEvents:self];
    }
    [self.statusLabel setText:statusStr];
}
/**
* Display the information of the mission if it is downloaded successfully.
*/
-(void)mission:(DJIMission *)mission didDownload:(NSError *)error {
    if (error) return;
    if ([mission isKindOfClass:[DJIWaypointMission class]]) {
        // Display information of the waypoint mission.
        [self showWaypointMission:(DJIWaypointMission*)mission];
    }
}

-(void)showWaypointMission:(DJIWaypointMission*)wpMission {
    NSMutableString* missionInfo = [NSMutableString stringWithString:@"The waypoint mission is downloaded successfully: \n"];
    [missionInfo appendString:[NSString stringWithFormat:@"RepeatTimes: %zd\n", wpMission.repeatTimes]];
    [missionInfo appendString:[NSString stringWithFormat:@"HeadingMode: %u\n", (unsigned int)wpMission.headingMode]];
    [missionInfo appendString:[NSString stringWithFormat:@"FinishedAction: %u\n", (unsigned int)wpMission.finishedAction]];
    [missionInfo appendString:[NSString stringWithFormat:@"FlightPathMode: %u\n", (unsigned int)wpMission.flightPathMode]];
    [missionInfo appendString:[NSString stringWithFormat:@"MaxFlightSpeed: %f\n", wpMission.maxFlightSpeed]];
    [missionInfo appendString:[NSString stringWithFormat:@"AutoFlightSpeed: %f\n", wpMission.autoFlightSpeed]];
    [missionInfo appendString:[NSString stringWithFormat:@"There are %zd waypoint(s). ", wpMission.waypointCount]];
    [self.statusLabel setText:missionInfo];
}
+(NSString *)descriptionForMissionState:(DJIWaypointMissionState)state {
    switch (state) {
        case DJIWaypointMissionStateUnknown:
            return @"Unknown";
        case DJIWaypointMissionStateExecuting:
            return @"Executing";
        case DJIWaypointMissionStateUploading:
            return @"Uploading";
        case DJIWaypointMissionStateRecovering:
            return @"Recovering";
        case DJIWaypointMissionStateDisconnected:
            return @"Disconnected";
        case DJIWaypointMissionStateNotSupported:
            return @"NotSupported";
        case DJIWaypointMissionStateReadyToUpload:
            return @"ReadyToUpload";
        case DJIWaypointMissionStateReadyToExecute:
            return @"ReadyToExecute";
        case DJIWaypointMissionStateExecutionPaused:
            return @"ExecutionPaused";
    }
    return @"Unknown";
}
+(NSString *)descriptionForExecuteState:(DJIWaypointMissionExecuteState)state {
    switch (state) {
        case DJIWaypointMissionExecuteStateInitializing:
            return @"Initializing";
        case DJIWaypointMissionExecuteStateMoving:
            return @"Moving";
        case DJIWaypointMissionExecuteStatePaused:
            return @"Paused";
        case DJIWaypointMissionExecuteStateBeginAction:
            return @"BeginAction";
        case DJIWaypointMissionExecuteStateDoingAction:
            return @"Doing Action";
        case DJIWaypointMissionExecuteStateFinishedAction:
            return @"Finished Action";
        case DJIWaypointMissionExecuteStateCurveModeMoving:
            return @"CurveModeMoving";
        case DJIWaypointMissionExecuteStateCurveModeTurning:
            return @"CurveModeTurning";
        case DJIWaypointMissionExecuteStateReturnToFirstWaypoint:
            return @"Return To first Point";
        default:
            break;
    }
    return @"Unknown";
}

@end

iOS Swift - WebRTC: change from front camera to back camera

WebRTC video uses the front camera by default, which works fine. However, I need to switch it to the back camera, and I have not been able to find any code to do that.
Which part do I need to edit?
Is it the localView, the localVideoTrack, or the capturer?
Swift 3.0
A peer connection can have only one RTCVideoTrack for sending the video stream.
To switch between the front and back camera, first remove the current video track from the peer connection.
Then create a new RTCVideoTrack for the camera you need, and set it on the peer connection.
I used these methods:
func swapCameraToFront() {
    let localStream: RTCMediaStream? = peerConnection?.localStreams.first as? RTCMediaStream
    if let videoTrack = localStream?.videoTracks.first as? RTCVideoTrack {
        localStream?.removeVideoTrack(videoTrack)
    }
    if let localVideoTrack: RTCVideoTrack = createLocalVideoTrack() {
        localStream?.addVideoTrack(localVideoTrack)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack)
    }
    if let stream = localStream {
        peerConnection?.remove(stream)
        peerConnection?.add(stream)
    }
}

func swapCameraToBack() {
    let localStream: RTCMediaStream? = peerConnection?.localStreams.first as? RTCMediaStream
    if let videoTrack = localStream?.videoTracks.first as? RTCVideoTrack {
        localStream?.removeVideoTrack(videoTrack)
    }
    if let localVideoTrack: RTCVideoTrack = createLocalVideoTrackBackCamera() {
        localStream?.addVideoTrack(localVideoTrack)
        delegate?.appClient(self, didReceiveLocalVideoTrack: localVideoTrack)
    }
    if let stream = localStream {
        peerConnection?.remove(stream)
        peerConnection?.add(stream)
    }
}
As of now I only have the answer in Objective-C, in regard to Ankit's comment below. I will convert it to Swift in time.
You can check the code below:
- (RTCVideoTrack *)createLocalVideoTrack {
    RTCVideoTrack *localVideoTrack = nil;
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionFront) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }
    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
    return localVideoTrack;
}

- (RTCVideoTrack *)createLocalVideoTrackBackCamera {
    RTCVideoTrack *localVideoTrack = nil;
    NSString *cameraID = nil;
    for (AVCaptureDevice *captureDevice in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (captureDevice.position == AVCaptureDevicePositionBack) {
            cameraID = [captureDevice localizedName];
            break;
        }
    }
    RTCVideoCapturer *capturer = [RTCVideoCapturer capturerWithDeviceName:cameraID];
    RTCMediaConstraints *mediaConstraints = [self defaultMediaStreamConstraints];
    RTCVideoSource *videoSource = [_factory videoSourceWithCapturer:capturer constraints:mediaConstraints];
    localVideoTrack = [_factory videoTrackWithID:@"ARDAMSv0" source:videoSource];
    return localVideoTrack;
}
If you decide to use the official Google build, here is the explanation:
First, you must configure your camera before calling start. The best place to do that is in ARDVideoCallViewDelegate, in the method didCreateLocalCapturer:
- (void)startCapture:(void (^)(BOOL succeeded))completionHandler {
    AVCaptureDevicePosition position = _usingFrontCamera ? AVCaptureDevicePositionFront : AVCaptureDevicePositionBack;
    __weak AVCaptureDevice *device = [self findDeviceForPosition:position];
    if ([device lockForConfiguration:nil]) {
        if ([device isFocusPointOfInterestSupported]) {
            [device setFocusModeLockedWithLensPosition:0.9 completionHandler:nil];
        }
    }
    AVCaptureDeviceFormat *format = [self selectFormatForDevice:device];
    if (format == nil) {
        RTCLogError(@"No valid formats for device %@", device);
        NSAssert(NO, @"");
        return;
    }
    NSInteger fps = [self selectFpsForFormat:format];
    [_capturer startCaptureWithDevice:device
                               format:format
                                  fps:fps
                    completionHandler:^(NSError *error) {
        NSLog(@"%@", error);
        if (error == nil) {
            completionHandler(true);
        }
    }];
}
Don't forget that enabling the capture device is asynchronous; it's sometimes better to use the completion handler to be sure everything is done as expected.
I am not sure which Chrome version you are using for WebRTC, but with v54 and above there is a BOOL property called useBackCamera in the RTCAVFoundationVideoSource class. You can make use of this property to switch between the front and back camera.
Swift 4.0 & 'GoogleWebRTC' : '1.1.20913'
RTCAVFoundationVideoSource class has a property named useBackCamera that can be used for switching the camera used.
@interface RTCAVFoundationVideoSource : RTCVideoSource

- (instancetype)init NS_UNAVAILABLE;

/**
 * Calling this function will cause frames to be scaled down to the
 * requested resolution. Also, frames will be cropped to match the
 * requested aspect ratio, and frames will be dropped to match the
 * requested fps. The requested aspect ratio is orientation agnostic and
 * will be adjusted to maintain the input orientation, so it doesn't
 * matter if e.g. 1280x720 or 720x1280 is requested.
 */
- (void)adaptOutputFormatToWidth:(int)width height:(int)height fps:(int)fps;

/** Returns whether rear-facing camera is available for use. */
@property(nonatomic, readonly) BOOL canUseBackCamera;

/** Switches the camera being used (either front or back). */
@property(nonatomic, assign) BOOL useBackCamera;

/** Returns the active capture session. */
@property(nonatomic, readonly) AVCaptureSession *captureSession;

@end
Below is the implementation for switching the camera:
var useBackCamera: Bool = false

func switchCamera() {
    useBackCamera = !useBackCamera
    self.switchCamera(useBackCamera: useBackCamera)
}

private func switchCamera(useBackCamera: Bool) {
    let localStream = peerConnection?.localStreams.first
    if let videoTrack = localStream?.videoTracks.first {
        localStream?.removeVideoTrack(videoTrack)
    }
    let localVideoTrack = createLocalVideoTrack(useBackCamera: useBackCamera)
    localStream?.addVideoTrack(localVideoTrack)
    self.delegate?.webRTCClientDidAddLocal(videoTrack: localVideoTrack)
    if let ls = localStream {
        peerConnection?.remove(ls)
        peerConnection?.add(ls)
    }
}

func createLocalVideoTrack(useBackCamera: Bool) -> RTCVideoTrack {
    let videoSource = self.factory.avFoundationVideoSource(with: self.constraints)
    videoSource.useBackCamera = useBackCamera
    let videoTrack = self.factory.videoTrack(with: videoSource, trackId: "video")
    return videoTrack
}
In the current version of WebRTC, RTCAVFoundationVideoSource has been deprecated and replaced with a
generic RTCVideoSource combined with an RTCVideoCapturer implementation.
In order to switch the camera I'm doing this:
- (void)switchCameraToPosition:(AVCaptureDevicePosition)position completionHandler:(void (^)(void))completionHandler {
    if (self.cameraPosition != position) {
        RTCMediaStream *localStream = self.peerConnection.localStreams.firstObject;
        [localStream removeVideoTrack:self.localVideoTrack];
        //[self.peerConnection removeStream:localStream];
        self.localVideoTrack = [self createVideoTrack];
        [self startCaptureLocalVideoWithPosition:position completionHandler:^{
            [localStream addVideoTrack:self.localVideoTrack];
            //[self.peerConnection addStream:localStream];
            if (completionHandler) {
                completionHandler();
            }
        }];
        self.cameraPosition = position;
    }
}
Take a look at the commented lines: if you start adding/removing the stream from the peer connection, it will cause a delay in the video connection.
I'm using GoogleWebRTC-1.1.25102
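For reference, a hedged Swift sketch of the same switch using the modern RTCCameraVideoCapturer API (the capturer is assumed to be the one already feeding your RTCVideoSource; names follow the GoogleWebRTC pod):
import WebRTC

// Hedged sketch: restart the capturer on the device at the requested position.
// The RTCVideoTrack itself stays attached to the peer connection, so there is
// no track removal and no renegotiation delay.
func switchCamera(to position: AVCaptureDevice.Position, capturer: RTCCameraVideoCapturer) {
    guard let device = RTCCameraVideoCapturer.captureDevices()
            .first(where: { $0.position == position }),
          let format = RTCCameraVideoCapturer.supportedFormats(for: device).last,
          let maxFrameRate = format.videoSupportedFrameRateRanges.map({ $0.maxFrameRate }).max()
    else { return }
    capturer.stopCapture {
        capturer.startCapture(with: device, format: format, fps: Int(maxFrameRate))
    }
}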

Sit-Up counter using CMDeviceMotion

I'm trying to replicate a fitness app similar to Runtastic's Fitness Apps.
Sit-Ups
This is our first app that uses the phone's built-in accelerometer to detect movement. You need to hold the phone against your chest, then sit up quickly enough and high enough for the accelerometer to register the movement and the app to count one sit-up. Be sure to do a proper sit-up by going high enough!
I did a prototype app similar to this question here and tried to implement a way to count sit-ups.
- (void)viewDidLoad {
    [super viewDidLoad];
    __block int count = 0; // __block so the handler block can mutate it
    motionManager = [[CMMotionManager alloc] init];
    if (motionManager.deviceMotionAvailable)
    {
        motionManager.deviceMotionUpdateInterval = 0.1;
        [motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue] withHandler:^(CMDeviceMotion *motion, NSError *error) {
            // Get the attitude of the device
            CMAttitude *attitude = motion.attitude;
            // Get the pitch (in radians) and convert to degrees.
            double degree = attitude.pitch * 180.0 / M_PI;
            NSLog(@"%f", degree);
            dispatch_async(dispatch_get_main_queue(), ^{
                // Update some UI
                if (degree >= 75.0)
                {
                    // It keeps counting as long as the condition is true!
                    count++;
                    self.lblCount.text = [NSString stringWithFormat:@"%i", count];
                }
            });
        }];
        NSLog(@"Device motion started");
    }
    else
    {
        NSLog(@"Device motion unavailable");
    }
}
The if condition works: if I place the device on my chest and do a proper sit-up, it counts. The problem is that it just keeps counting while the condition is true; I want it to count again only after the device has gone back to its original position.
Can anyone come up with a logical implementation for this?
A simple boolean flag did the trick:
__block BOOL situp = NO;

if (!situp)
{
    if (degree >= 75.0)
    {
        count++;
        self.lblCount.text = [NSString stringWithFormat:@"%i", count];
        situp = YES;
    }
}
else
{
    if (degree <= 10.0)
    {
        situp = NO;
    }
}
Not the best logical implementation here, but it gets the job done...
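For completeness, a Swift sketch of the same hysteresis idea, assuming the motionManager property and lblCount label from the code above:
var count = 0
var inSitUp = false // only count once per up/down cycle

motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
    guard let self = self, let attitude = motion?.attitude else { return }
    let degrees = attitude.pitch * 180.0 / .pi
    if !inSitUp, degrees >= 75.0 { // passed the "up" threshold
        count += 1
        self.lblCount.text = "\(count)"
        inSitUp = true
    } else if inSitUp, degrees <= 10.0 { // back near flat: re-arm the counter
        inSitUp = false
    }
}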

Copying Swift arrays from background to foreground

If we go from a Swift background thread to the foreground, what is the proper way to do the equivalent of [nsObject copy] in Swift?
For example, in Objective-C we would loop through a long array of ALAssets (say, 10,000+) in the background by doing:
[alGroup enumerateAssetsUsingBlock:^(ALAsset *alPhoto, NSUInteger index, BOOL *stop)
{
    // Here to make changes for speeding up image loading from the device library...
    // =====================================================
    // >>>>>>>>>>>>>>>>>>> IN BACKGROUND <<<<<<<<<<<<<<<<<<<
    // =====================================================
    if (alPhoto == nil)
    {
        c(@"number of assets to display: %d", (int)bgAssetMedia.count);
        // c(@"All device library photos uploaded into memory...%@", bgAssetMedia);
        dispatch_async(dispatch_get_main_queue(), ^(void)
        {
            // =====================================================
            // >>>>>>>>>>>>>>>>>>> IN FOREGROUND <<<<<<<<<<<<<<<<<<<
            // =====================================================
            [ui hideSpinner];
            if (_bReverse)
                // Here we copy all the photos from the device library into an array (_assetPhotos)...
                _assetPhotos = [[NSMutableArray alloc] initWithArray:[[[bgAssetMedia copy] reverseObjectEnumerator] allObjects]];
            else
                _assetPhotos = [[NSMutableArray alloc] initWithArray:[bgAssetMedia copy]];
            // NSLog(@"%lu", (unsigned long)_assetPhotos.count);
            if (_assetPhotos.count > 0)
            {
                result(_assetPhotos);
            }
        });
    } else {
        // If we have a Custom album, let's remove all shared videos from the Camera Roll
        if (![self isPhotoInCustomAlbum:alPhoto])
        {
            // For some reason, shared glancy videos still show with 00:00 minutes and seconds, so remove them now
            BOOL isVideo = [[alPhoto valueForProperty:ALAssetPropertyType] isEqual:ALAssetTypeVideo];
            int duration = 0;
            int minutes = 0;
            int seconds = 0;
            // NSString *bgVideoLabel = nil;
            if (isVideo)
            {
                NSString *strduration = [alPhoto valueForProperty:ALAssetPropertyDuration];
                duration = [strduration intValue];
                minutes = duration / 60;
                seconds = duration % 60;
                // bgVideoLabel = [NSString stringWithFormat:@"%d:%02d", minutes, seconds];
                if (minutes > 0 || seconds > 0)
                {
                    [bgAssetMedia addObject:alPhoto];
                }
            } else {
                [bgAssetMedia addObject:alPhoto];
            }
        }
    }
    // NSLog(@"%lu", (unsigned long)bgAssetMedia.count);
}];
Then we would switch to the foreground to update the UIViewController, with these lines from the snippet above:
_assetPhotos = [[NSMutableArray alloc] initWithArray:[bgAssetMedia copy]];
The "copy" function was the black magic that allowed us to quickly marshal the memory from background to foreground without having to loop through array again.
Is there a similar method in Swift? Perhaps something like this:
_assetPhotos = NSMutableArray(array: bgAssetMedia.copy())
Is Swift now thread-safe for passing memory pointers from background to foreground? What's the new protocol? Thank you.
I found the answer after running large queries on Realm and Core Data database contexts: it's easy to just make a basic copy of the memory pointer and downcast it to match the class.
let mediaIdFG = mediaId.copy() as! String
Full example in context below:
static func createOrUpdate(dictionary: NSDictionary) -> Promise<Media> {
    // Query and update from any thread
    return Promise { fulfill, reject in
        executeInBackground {
            // c("BG media \(dictionary)")
            let realm: RLMRealm = RLMRealm.defaultRealm()
            realm.beginWriteTransaction()
            let media = Media.createOrUpdateInRealm(realm, withJSONDictionary: dictionary as [NSObject: AnyObject])
            // media.type = type
            c("BG media \(media)")
            let mediaId = media.localIdentifier
            do {
                try realm.commitWriteTransaction()
                executeInForeground({
                    let mediaIdFG = mediaId.copy() as! String
                    let newMedia = Media.findOneByLocalIdentifier(mediaIdFG)
                    c("FG \(mediaIdFG) newMedia \(newMedia)")
                    fulfill(newMedia)
                })
            } catch {
                reject(Constants.createError("Realm Something went wrong!"))
            }
        }
    } // return promise
} // func createOrUpdate
Posting my own answer to let you know my findings. I also found this helpful article about Swift's copy(), a.k.a. Objective-C's copyWithZone: https://www.hackingwithswift.com/example-code/system/how-to-copy-objects-in-swift-using-copy
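For what it's worth, plain Swift arrays make this simpler than the ALAsset days: Array is a value type with copy-on-write, so handing a copy to the main queue requires no copy() at all. A minimal sketch with hypothetical names:
import Foundation

// Hypothetical example: build a large array on a background queue, then hand
// a value-type copy to the main queue. No NSCopying involved.
func loadAssetIdentifiers(result: @escaping ([String]) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        var bgAssetMedia = [String]()
        for index in 0..<10_000 {
            bgAssetMedia.append("asset-\(index)")
        }
        let snapshot = bgAssetMedia // value semantics: safe to hand across queues
        DispatchQueue.main.async {
            result(snapshot)
        }
    }
}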

How to detect network signal strength in iOS (Reachability)

I am creating a new travel application for iOS. The application is highly dependent on maps and will include two of them.
My first map (Apple Maps) will be used when the user has a strong network signal.
My second map (offline MapBox) will be used when there isn't any network, or the signal is really low.
Why do I have two different maps in one application? My application is a directions app, so when the user has a really low network signal, or none at all, it will fall back to the offline MapBox map. Also, Apple Maps will have Yelp integration, while the offline MapBox map will not.
So my question: how can I detect the network signal strength on Wi-Fi, 4G LTE, and 3G?
My original thought was to time the download of a file, and see how long it takes:
@interface ViewController () <NSURLSessionDelegate, NSURLSessionDataDelegate>

@property (nonatomic) CFAbsoluteTime startTime;
@property (nonatomic) CFAbsoluteTime stopTime;
@property (nonatomic) long long bytesReceived;
@property (nonatomic, copy) void (^speedTestCompletionHandler)(CGFloat megabytesPerSecond, NSError *error);

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    [self testDownloadSpeedWithTimout:5.0 completionHandler:^(CGFloat megabytesPerSecond, NSError *error) {
        NSLog(@"%0.1f; error = %@", megabytesPerSecond, error);
    }];
}

/// Test speed of download
///
/// Test the speed of a connection by downloading some predetermined resource. Alternatively, you could add the
/// URL of what to use for testing the connection as a parameter to this method.
///
/// @param timeout The maximum amount of time for the request.
/// @param completionHandler The block to be called when the request finishes (or times out).
///                          The error parameter to this closure indicates whether there was an error downloading
///                          the resource (other than timeout).
///
/// @note The timeout parameter doesn't have to be enough to download the entire
///       resource, but rather just sufficiently long to measure the speed of the download.

- (void)testDownloadSpeedWithTimout:(NSTimeInterval)timeout completionHandler:(nonnull void (^)(CGFloat megabytesPerSecond, NSError * _Nullable error))completionHandler {
    NSURL *url = [NSURL URLWithString:@"http://insert.your.site.here/yourfile"];
    self.startTime = CFAbsoluteTimeGetCurrent();
    self.stopTime = self.startTime;
    self.bytesReceived = 0;
    self.speedTestCompletionHandler = completionHandler;
    NSURLSessionConfiguration *configuration = [NSURLSessionConfiguration ephemeralSessionConfiguration];
    configuration.timeoutIntervalForResource = timeout;
    NSURLSession *session = [NSURLSession sessionWithConfiguration:configuration delegate:self delegateQueue:nil];
    [[session dataTaskWithURL:url] resume];
}

- (void)URLSession:(NSURLSession *)session dataTask:(NSURLSessionDataTask *)dataTask didReceiveData:(NSData *)data {
    self.bytesReceived += [data length];
    self.stopTime = CFAbsoluteTimeGetCurrent();
}

- (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task didCompleteWithError:(NSError *)error {
    CFAbsoluteTime elapsed = self.stopTime - self.startTime;
    CGFloat speed = elapsed != 0 ? self.bytesReceived / elapsed / 1024.0 / 1024.0 : -1;
    // Treat timeout as no error (as we're testing speed, not worried about whether we got the entire resource or not)
    if (error == nil || ([error.domain isEqualToString:NSURLErrorDomain] && error.code == NSURLErrorTimedOut)) {
        self.speedTestCompletionHandler(speed, nil);
    } else {
        self.speedTestCompletionHandler(speed, error);
    }
}

@end
Note, this measures the speed including the latency of starting the connection. You could alternatively initialize startTime in didReceiveResponse, if you wanted to factor out that initial latency.
Having done that, in retrospect, I don't like spending time or bandwidth downloading something that has no practical benefit to the app. So, as an alternative, I might suggest a far more pragmatic approach: why don't you just try to open an MKMapView and see how long it takes to finish downloading the map? If it fails, or if it takes more than a certain amount of time, then switch to your offline map. Again, there is quite a bit of variability here (not only because of network bandwidth and latency, but also because some map images appear to be cached), so make sure to set kMaximumElapsedTime large enough to handle all the reasonable permutations of a successful connection (i.e., don't be too aggressive in using a low value).
To do this, just make sure to set your view controller to be the delegate of the MKMapView. And then you can do:
@interface ViewController () <MKMapViewDelegate>

@property (nonatomic, strong) NSDate *startDate;

@end

static CGFloat const kMaximumElapsedTime = 5.0;

@implementation ViewController

// insert the rest of your implementation here
#pragma mark - MKMapViewDelegate methods
- (void)mapViewWillStartLoadingMap:(MKMapView *)mapView {
    NSDate *localStartDate = [NSDate date];
    self.startDate = localStartDate;
    double delayInSeconds = kMaximumElapsedTime;
    dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
    dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
        // Check to see if either:
        //   (a) the start date property is not nil (because if it is, we
        //       finished the map download); and
        //   (b) the start date property is the same as the value we set
        //       above, as it's possible this map download is done, but
        //       we're already in the process of downloading the next
        //       map.
        if (self.startDate && self.startDate == localStartDate)
        {
            [[[UIAlertView alloc] initWithTitle:nil
                                        message:[NSString stringWithFormat:@"Map timed out after %.1f", delayInSeconds]
                                       delegate:nil
                              cancelButtonTitle:@"OK"
                              otherButtonTitles:nil] show];
        }
    });
}

- (void)mapViewDidFailLoadingMap:(MKMapView *)mapView withError:(NSError *)error {
    self.startDate = nil;
    [[[UIAlertView alloc] initWithTitle:nil
                                message:@"Online map failed"
                               delegate:nil
                      cancelButtonTitle:@"OK"
                      otherButtonTitles:nil] show];
}

- (void)mapViewDidFinishLoadingMap:(MKMapView *)mapView
{
    NSTimeInterval elapsed = [[NSDate date] timeIntervalSinceDate:self.startDate];
    self.startDate = nil;
    self.statusLabel.text = [NSString stringWithFormat:@"%.1f seconds", elapsed];
}
For Swift
class NetworkSpeedProvider: NSObject {
    var startTime = CFAbsoluteTime()
    var stopTime = CFAbsoluteTime()
    var bytesReceived: CGFloat = 0
    var speedTestCompletionHandler: ((_ megabytesPerSecond: CGFloat, _ error: Error?) -> Void)? = nil

    func test() {
        testDownloadSpeed(withTimout: 5.0, completionHandler: { (megabytesPerSecond: CGFloat, error: Error?) -> Void in
            print("\(megabytesPerSecond); error = \(String(describing: error))")
        })
    }
}

extension NetworkSpeedProvider: URLSessionDataDelegate, URLSessionDelegate {

    func testDownloadSpeed(withTimout timeout: TimeInterval, completionHandler: @escaping (_ megabytesPerSecond: CGFloat, _ error: Error?) -> Void) {
        // Set any relevant URL string for a file to download
        let urlForSpeedTest = URL(string: "https://any.jpg")
        startTime = CFAbsoluteTimeGetCurrent()
        stopTime = startTime
        bytesReceived = 0
        speedTestCompletionHandler = completionHandler
        let configuration = URLSessionConfiguration.ephemeral
        configuration.timeoutIntervalForResource = timeout
        let session = URLSession(configuration: configuration, delegate: self, delegateQueue: nil)
        guard let checkedUrl = urlForSpeedTest else { return }
        session.dataTask(with: checkedUrl).resume()
    }

    func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive data: Data) {
        bytesReceived += CGFloat(data.count)
        stopTime = CFAbsoluteTimeGetCurrent()
    }

    func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
        let elapsed = stopTime - startTime
        let speed: CGFloat = elapsed != 0 ? bytesReceived / CGFloat(elapsed) / 1024.0 / 1024.0 : -1.0
        // Treat timeout as no error (as we're testing speed, not worried about whether we got the entire resource or not)
        if error == nil || ((((error as NSError?)?.domain) == NSURLErrorDomain) && (error as NSError?)?.code == NSURLErrorTimedOut) {
            speedTestCompletionHandler?(speed, nil)
        } else {
            speedTestCompletionHandler?(speed, error)
        }
    }
}
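As with the Objective-C version above, this measures speed including connection latency. To factor that out in the Swift version, you could reset startTime when the response headers arrive; a minimal sketch to add to the same delegate extension:
// Reset the start time when the response arrives, so only the body transfer is timed.
func urlSession(_ session: URLSession, dataTask: URLSessionDataTask,
                didReceive response: URLResponse,
                completionHandler: @escaping (URLSession.ResponseDisposition) -> Void) {
    startTime = CFAbsoluteTimeGetCurrent()
    completionHandler(.allow)
}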
I believe a Google search will help.
Look at the following threads on Stack Overflow:
iOS wifi scan, signal strength
iPhone signal strength
So I don't think you can do this without using private APIs.
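That said, you can at least distinguish the current radio access technology (not the signal strength itself) with the public CoreTelephony API; a minimal sketch, assuming knowing the cellular generation is enough to choose between your two maps:
import CoreTelephony

// Identify the cellular technology in use; falls back to a label when
// no cellular radio is active (e.g. on Wi-Fi only).
func currentRadioTechnology() -> String {
    guard let tech = CTTelephonyNetworkInfo().currentRadioAccessTechnology else {
        return "No cellular (possibly Wi-Fi)"
    }
    switch tech {
    case CTRadioAccessTechnologyLTE:
        return "4G LTE"
    case CTRadioAccessTechnologyWCDMA, CTRadioAccessTechnologyHSDPA, CTRadioAccessTechnologyHSUPA:
        return "3G"
    case CTRadioAccessTechnologyEdge, CTRadioAccessTechnologyGPRS:
        return "2G"
    default:
        return tech
    }
}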
