How to use the altimeter? (increase & decrease methods) - iOS

I am a pilot and an iOS developer. I would like to know if it is possible to create two methods that send notifications, one when the altitude increases and another when it decreases (takeoff and landing). I have already written code that retrieves the altitude.
- (CMAltimeter *)altimeter
{
    if (!_altimeter) {
        _altimeter = [[CMAltimeter alloc] init];
    }
    return _altimeter;
}
If you want, I can share the project via Dropbox to show you my code.

Your code only creates a CMAltimeter instance.
To get altitude data, use startRelativeAltitudeUpdatesToQueue:withHandler: after checking whether your device actually supports altimeter measurements, and send your notifications when you detect a takeoff or landing in the callback:
if ([CMAltimeter isRelativeAltitudeAvailable]) {
    CMAltimeter *altimeter = [[CMAltimeter alloc] init];
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];
    [altimeter startRelativeAltitudeUpdatesToQueue:queue withHandler:^(CMAltitudeData *altitudeData, NSError *error) {
        // your code here
    }];
}
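Inside that handler you could, as a rough sketch, smooth the readings and post notifications when the filtered altitude moves past a threshold. The filter constant, threshold, and notification names below are placeholders you would need to define and tune yourself:
__block double filteredAltitude  = 0.0;
__block double referenceAltitude = 0.0;
__block BOOL   hasReference      = NO;
const double kFilterFactor = 0.1; // low-pass smoothing factor, 0..1
const double kThreshold    = 5.0; // metres of change before a notification fires (needs tuning)

[altimeter startRelativeAltitudeUpdatesToQueue:queue withHandler:^(CMAltitudeData *altitudeData, NSError *error) {
    if (error != nil) {
        return;
    }
    double raw = altitudeData.relativeAltitude.doubleValue;
    // Simple low-pass filter to smooth out sensor noise.
    filteredAltitude += kFilterFactor * (raw - filteredAltitude);

    if (!hasReference) {
        referenceAltitude = filteredAltitude;
        hasReference = YES;
        return;
    }
    if (filteredAltitude - referenceAltitude > kThreshold) {
        // Climbing: relative altitude rose past the threshold.
        [[NSNotificationCenter defaultCenter] postNotificationName:@"AltitudeIncreased" object:nil];
        referenceAltitude = filteredAltitude;
    } else if (referenceAltitude - filteredAltitude > kThreshold) {
        // Descending: relative altitude dropped past the threshold.
        [[NSNotificationCenter defaultCenter] postNotificationName:@"AltitudeDecreased" object:nil];
        referenceAltitude = filteredAltitude;
    }
}];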

A few remarks:
You probably need to filter out altimeter signal noise using a low-pass filter.
You certainly need to define a threshold value for altimeter changes, because you don't want to be triggered continuously by every 0.1 m change.
The altimeter is a relative measurement. This means that you'd have to tell the app when you're on the ground; a kind of zero-level set.
Of course you can't use this in a pressurised plane.
Plane speed probably influences the local pressure inside a non-pressurised plane.
Fuselage vibrations probably influence the local pressure.
@Geroen's answer shows how to get altimeter updates.
I think you should first make the app just show the altimeter value in a large UILabel and see what it looks like during a flight. This will give you an idea of how messy the data is.


MKDirections calculateETAWithCompletionHandler: in background state

I have an app which monitors significant location changes.
Upon receiving a new location update, I want to calculate the duration from the current location to a specified location.
To calculate the duration I use calculateETAWithCompletionHandler: from the MKDirections class.
Everything works as expected as long as the app is in the foreground.
When I send the app to the background, it correctly receives location updates in the background and everything works until I call calculateETAWithCompletionHandler:, which never returns results.
MKDirectionsHandler, the completion handler of calculateETAWithCompletionHandler:, is never called while the app is in the background.
As soon as the app comes into the foreground again, all the waiting completion handlers receive results.
MKMapItem* origin = [MKMapItem mapItemForCurrentLocation];
MKMapItem* destination = [[MKMapItem alloc] initWithPlacemark:destinationPlacemark];
MKDirectionsRequest* request = [MKDirectionsRequest new];
[request setSource:origin];
[request setDestination:destination];
[request setTransportType:MKDirectionsTransportTypeAutomobile];
MKDirections* directions = [[MKDirections alloc] initWithRequest:request];
[directions calculateETAWithCompletionHandler:^(MKETAResponse *response, NSError *error) {
    completion(response.expectedTravelTime, error);
}];
Is calling calculateETAWithCompletionHandler: in the background not allowed?
Is there any way to resolve this issue?
I believe the way you are using MKMapItem is the problem: it needs to run on the main thread, so I don't think it will work for what you need. When collecting the location in the background you should use Core Location instead.
The documentation around MKDirection is not very specific on this issue, the most relevant section I could find was:
An MKDirections object provides you with route-based directions data from Apple servers. You can use instances of this class to get travel-time information or driving or walking directions based on the data in an MKDirectionsRequest object that you provide. The directions object passes your request to the Apple servers and returns the requested information to a block that you provide.
Since you are trying to calculate travel time, it appears that calculateETAWithCompletionHandler: performs a network request to the Apple servers. With the application in a background state, the request is put on hold until the application enters the foreground again.
Unfortunately I don't think there is an easy way around this. You could try a "guesstimation" approach: before the application enters the background, calculate the ETA for the user, and then while it is in the background increase or decrease the ETA proportionally to the straight-line distance between the current location and the destination. Depending on how precise you want your results to be, this broad estimate could be enough to satisfy your requirements.
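A minimal sketch of that guesstimation idea (the property names and the linear-scaling assumption are mine, not part of MapKit):
// Hypothetical sketch: capture a baseline ETA before backgrounding, then scale it
// by the remaining straight-line distance while in the background.
// e.g. in a class extension; requires CoreLocation.
@property (nonatomic) NSTimeInterval baselineETA;          // from calculateETAWithCompletionHandler:
@property (nonatomic) CLLocationDistance baselineDistance; // distance when the ETA was fetched

- (NSTimeInterval)estimatedETAForLocation:(CLLocation *)current destination:(CLLocation *)destination
{
    CLLocationDistance remaining = [current distanceFromLocation:destination];
    if (self.baselineDistance <= 0) {
        return self.baselineETA;
    }
    // Assume travel time shrinks roughly in proportion to the remaining distance.
    return self.baselineETA * (remaining / self.baselineDistance);
}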

`[AVCaptureSession canAddOutput:output]` returns NO intermittently. Can I find out why?

I am using canAddOutput: to determine if I can add a AVCaptureMovieFileOutput to a AVCaptureSession and I'm finding that canAddOutput: is sometimes returning NO, and mostly returning YES. Is there a way to find out why a NO was returned? Or a way to eliminate the situation that is causing the NO to be returned? Or anything else I can do that will prevent the user from just seeing an intermittent failure?
Some further notes: this happens approximately once in every 30 calls. As my app has not launched yet, it has only been tested on one device: an iPhone 5 running iOS 7.1.2.
Here is a quote from the documentation (discussion of canAddOutput:):
You cannot add an output that reads from a track of an asset other than the asset used to initialize the receiver.
Here is an explanation that should help you (please check whether your code matches this guide; if you're doing everything right it should not fail, because canAddOutput: basically checks compatibility).
AVCaptureSession
Coordinates the connections between device inputs and outputs, similar to connecting filters in DirectShow. Once input and output are connected and the session is started, data flows from the input to the output.
Several main points:
a) AVCaptureDevice represents a capture device, such as a camera or microphone.
b) AVCaptureInput wraps a device so it can feed data into a session.
c) AVCaptureOutput receives data from the session.
Inputs and outputs are not one-to-one; for example, a session can have a single movie output fed by both video and audio inputs, as in the sketch below.
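A minimal sketch of wiring such a session together under those assumptions (error handling omitted):
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// One input per device: camera and microphone.
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *mic    = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:&error];

// A single movie file output consumes both the video and the audio input.
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];

if (videoInput && [session canAddInput:videoInput]) { [session addInput:videoInput]; }
if (audioInput && [session canAddInput:audioInput]) { [session addInput:audioInput]; }
if ([session canAddOutput:movieOutput])             { [session addOutput:movieOutput]; }

[session startRunning];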
Switching between the front and back cameras:
AVCaptureSession *session = <# A capture session #>;
[session beginConfiguration];
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
[session commitConfiguration];
Adding the capture input:
To add a capture device to a capture session, you use an instance of AVCaptureDeviceInput (a concrete subclass of the abstract AVCaptureInput class). The capture device input manages the device's ports.
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
Adding outputs; the output classes:
To get output from a capture session, you add one or more outputs. An output is an instance of a concrete subclass of AVCaptureOutput; you use:
AVCaptureMovieFileOutput to output to a movie file
AVCaptureVideoDataOutput if you want to process frames from the video being captured
AVCaptureAudioDataOutput if you want to process the audio data being captured
AVCaptureStillImageOutput if you want to capture still images with accompanying metadata
You add outputs to a capture session using addOutput:. You check whether a capture output is compatible with an existing session using canAddOutput:. You can add and remove outputs as you want while the session is running.
AVCaptureSession *captureSession = <# Get a capture session #>;
AVCaptureMovieFileOutput *movieOutput = <# Create and configure a movie output #>;
if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
}
else {
    // Handle the failure.
}
Saving to a movie file; adding the movie file output:
You save movie data to a file using an AVCaptureMovieFileOutput object. (AVCaptureMovieFileOutput is a concrete subclass of AVCaptureFileOutput, which defines much of the basic behavior.) You can configure various aspects of the movie file output, such as the maximum duration of the recording or the maximum file size. You can also prohibit recording if there is less than a given amount of disk space left.
AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
CMTime maxDuration = <# Create a CMTime to represent the maximum duration #>;
aMovieFileOutput.maxRecordedDuration = maxDuration;
aMovieFileOutput.minFreeDiskSpaceLimit = <# An appropriate minimum given the quality of the movie format and the duration #>;
Processing preview video frame data: each viewfinder frame can be used for subsequent higher-level processing, such as face detection.
An AVCaptureVideoDataOutput object uses delegation to vend video frames. You set the delegate using setSampleBufferDelegate:queue:. In addition to the delegate, you specify a serial queue on which the delegate methods are invoked. You must use a serial queue to ensure that frames are delivered to the delegate in the proper order. You should not pass the queue returned by dispatch_get_current_queue, since there is no guarantee as to which thread the current queue is running on. You can use the queue to modify the priority given to delivering and processing the video frames.
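A small sketch of that setup (the queue label and the delegate being self are placeholders of mine):
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];

// A serial dispatch queue so frames arrive at the delegate in order.
dispatch_queue_t frameQueue = dispatch_queue_create("com.example.videoFrames", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:frameQueue]; // self adopts AVCaptureVideoDataOutputSampleBufferDelegate

// Drop late frames rather than queueing them up if processing falls behind.
videoOutput.alwaysDiscardsLateVideoFrames = YES;

if ([captureSession canAddOutput:videoOutput]) {
    [captureSession addOutput:videoOutput];
}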
When processing frames you must put limits on the image size and the processing time; if processing takes too long, the underlying sensor will stop delivering data to the layer and to the callback.
You should set the session output to the lowest practical resolution for your application. Setting the output to a higher resolution than necessary wastes processing cycles and needlessly consumes power. You must ensure that your implementation of captureOutput:didOutputSampleBuffer:fromConnection: is able to process a sample buffer within the amount of time allotted to a frame. If it takes too long and you hold onto the video frames, AVFoundation will stop delivering frames, not only to your delegate but also to other outputs such as a preview layer.
Capturing still images:
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
Different formats are supported, and a JPEG stream can be generated directly.
If you want to capture a JPEG image, you should typically not specify your own compression format. Instead, you should let the still image output do the compression for you, since its compression is hardware-accelerated. If you need a data representation of the image, you can use jpegStillImageNSDataRepresentation: to get an NSData object without re-compressing the data, even if you modify the image's metadata.
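A brief sketch of triggering a capture and getting JPEG data back (connection lookup simplified):
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                               completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (imageDataSampleBuffer == NULL) {
        return; // handle error
    }
    // Hardware-compressed JPEG data, no re-compression needed.
    NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    UIImage *image = [UIImage imageWithData:jpegData];
    // Use or save the image...
}];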
Camera preview display:
You can provide the user with a preview of what's being recorded using an AVCaptureVideoPreviewLayer object. AVCaptureVideoPreviewLayer is a subclass of CALayer (see the Core Animation Programming Guide). You don't need any outputs to show the preview.
AVCaptureSession *captureSession = <# Get a capture session #>;
CALayer *viewLayer = <# Get a layer from the view in which you want to present the preview #>;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];
In general, the preview layer behaves like any other CALayer object in the render tree (see the Core Animation Programming Guide). You can scale the image and perform transformations, rotations and so on just as you would with any layer. One difference is that you may need to set the layer's orientation property to specify how it should rotate images coming from the camera. In addition, on iPhone 4 the preview layer supports mirroring (this is the default when previewing the front-facing camera).
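For example, a sketch of sizing and orienting the preview (on newer SDKs orientation is set through the layer's connection rather than the older, deprecated orientation property):
// Size the preview layer and orient it to match the interface.
captureVideoPreviewLayer.frame = viewLayer.bounds;
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
if (captureVideoPreviewLayer.connection.isVideoOrientationSupported) {
    captureVideoPreviewLayer.connection.videoOrientation = AVCaptureVideoOrientationPortrait;
}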
Referring to this answer, there is a possibility that this delegate method may be running in the background, which can cause the previous AVCaptureSession not to be torn down properly, sometimes resulting in canAddOutput: returning NO.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
The solution might be to call stopRunning in the above delegate (of course after doing the necessary actions and condition checks; you need to finish off your previous session properly, right?).
In addition, it would be better if you provided some code showing what you are trying to do.
It can be one of these two cases:
1) The session is already running.
2) You have already added the output.
You can't add the same output or input twice, and you also can't create two different sessions.
It may be a combination of:
Calling this method when the camera is busy.
Not properly removing your previously connected AVCaptureSession.
You should try to only add it once (where I guess canAddOutput: will always be YES) and just pause/resume your session as needed:
// Stop session if possible
if (_captureSession.running && !_captureInProgress)
{
    [_captureSession stopRunning];
    NBULogVerbose(@"Capture session: {\n%@} stopped running", _captureSession);
}
You can take a look here.
I think this will help you
canAddOutput:
Returns a Boolean value that indicates whether a given output can be added to the session.
- (BOOL)canAddOutput:(AVCaptureOutput *)output
Parameters
output
An output that you want to add to the session.
Return Value
YES if output can be added to the session, otherwise NO.
Availability
Available in OS X v10.7 and later.
Here is the link to the Apple doc: Click here

Using ReactiveCocoa to track UI updates with a remote object

I'm making an iOS app which lets you remotely control music in an app playing on your desktop.
One of the hardest problems is being able to update the position of the "tracker" (which shows the time position and duration of the currently playing song) correctly. There are several sources of input here:
At launch, the remote sends a network request to get the initial position and duration of the currently playing song.
When the user adjusts the position of the tracker using the remote, it sends a network request to the music app to change the position of the song.
If the user uses the app on the desktop to change the position of the tracker, the app sends a network request to the remote with the new position of the tracker.
If the song is currently playing, the position of the tracker is updated every 0.5 seconds or so.
At the moment, the tracker is a UISlider which is backed by a "Player" model. Whenever the user changes the position on the slider, it updates the model and sends a network request, like so:
In NowPlayingViewController.m
[[slider rac_signalForControlEvents:UIControlEventTouchUpInside] subscribeNext:^(UISlider *x) {
    [playerModel seekToPosition:x.value];
}];

[RACObserve(playerModel, position) subscribeNext:^(id x) {
    slider.value = playerModel.position;
}];
In PlayerModel.m:
@property (nonatomic) NSTimeInterval position;

- (void)seekToPosition:(NSTimeInterval)position
{
    self.position = position;
    [self.client newRequestWithMethod:@"seekTo" params:@[positionArg] callback:NULL];
}

- (void)receivedPlayerUpdate:(NSDictionary *)json
{
    self.position = [[json objectForKey:@"position"] doubleValue];
}
The problem is when a user "fiddles" with the slider and queues up a number of network requests, which all come back at different times. The user could have moved the slider again by the time a response is received, so the response moves the slider back to a previous value.
My question: How do I use ReactiveCocoa correctly in this example, ensuring that updates from the network are dealt with, but only if the user hasn't moved the slider since?
In your GitHub thread about this you say that you want to consider the remote's updates as canonical. That's good, because (as Josh Abernathy suggested there), RAC or not, you need to pick one of the two sources to take priority (or you need timestamps, but then you need a reference clock...).
Given your code and disregarding RAC, the solution is just setting a flag in seekToPosition: and unsetting it with a timer. Check the flag in receivedPlayerUpdate:, ignoring the update if it's set.
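A minimal sketch of that flag approach, reusing the question's methods (the ignoreRemoteUpdates property and the one-second grace period are made up):
// In PlayerModel.m - hypothetical flag-based approach.
- (void)seekToPosition:(NSTimeInterval)position
{
    self.ignoreRemoteUpdates = YES;   // user-initiated change takes priority
    self.position = position;
    [self.client newRequestWithMethod:@"seekTo" params:@[positionArg] callback:NULL];

    // Re-enable remote updates after a short grace period.
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        self.ignoreRemoteUpdates = NO;
    });
}

- (void)receivedPlayerUpdate:(NSDictionary *)json
{
    if (self.ignoreRemoteUpdates) {
        return; // the user moved the slider recently; keep their value
    }
    self.position = [[json objectForKey:@"position"] doubleValue];
}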
By the way, you should use the RAC() macro to bind your slider's value, rather than the subscribeNext: that you've got:
RAC(slider, value) = RACObserve(playerModel, position);
You can definitely construct a signal chain to do what you want, though. You've got four signals you need to combine.
For the last item, the periodic update, you can use interval:onScheduler: like so:
[[RACSignal interval:kPositionFetchSeconds onScheduler:[RACScheduler scheduler]] map:^(id _){
    return /* Request position over network */;
}];
The map: just ignores the date that the interval:... signal produces, and fetches the position. Since your requests and messages from the desktop have equal priority, merge: those together:
[RACSignal merge:@[desktopPositionSignal, timedRequestSignal]];
You decided that you don't want either of those signals going through if the user has touched the slider, though. This can be accomplished in one of two ways. Using the flag I suggested, you could filter: that merged signal:
[mergedSignal filter:^BOOL(id _){ return !userFiddlingWithSlider; }];
Better than that, avoiding extra state, would be to build an operation out of a combination of throttle: and sample: that passes a value from a signal at a certain interval after another signal has not sent anything:
[mergedSignal sample:
[sliderSignal throttle:kUserFiddlingWithSliderInterval]];
(And you might, of course, want to throttle/sample the interval:onScheduler: signal in the same way, before the merge, in order to avoid unnecessary network requests.)
You can put this all together in PlayerModel, binding it to position. You'll just need to give the PlayerModel the slider's rac_signalForControlEvents:, and then merge in the slider value. Since you're using the same signal in multiple places in one chain, I believe you want to "multicast" it.
Finally, use startWith: to get your first item above, the initial position from the desktop app, into the stream.
RAC(self, position) =
    [[RACSignal merge:@[sampledSignal,
                        [sliderSignal map:^id(UISlider *slider){
                            return @(slider.value);
                        }]]
     ] startWith:/* Request position over network */];
The decision to break each signal out into its own variable or string them all together Lisp-style I'll leave to you.
Incidentally, I've found it helpful to actually draw out the signal chains when working on problems like this. I made a quick diagram for your scenario. It helps with thinking of the signals as entities in their own right, as opposed to worrying about the values that they carry.

Set an initial focal distance on iOS

I'm working on an iOS-app where one of the features is scanning QR-codes. For this I'm using the excellent library, ZBar. The scanning works fine and is generally really quick. However when you use smaller QR-codes it takes a bit longer to scan, mostly due to the fact that the autofocus needs some time to adjust. I was experimenting and noticed that the focus could be locked using the following code:
AVCaptureDevice *cameraDevice = readerView.device;
if ([cameraDevice lockForConfiguration:nil]) {
[cameraDevice setFocusMode:AVCaptureFocusModeLocked];
[cameraDevice unlockForConfiguration];
}
When this code is used after a successful scan, the coming scans are really quick. That made me wonder, could I somehow lock the focus before even scanning one code? The app will only scan rather small QR-codes so there will never be a need for focusing on something far away. Sure, I could implement something like tap to focus, but preferably I would like to avoid that extra step.
Is there a way to achieve this? Or is there maybe another way of speeding things up when dealing with smaller QR-codes?
// Alexander
In iOS7 this is now possible!
Apple has added the property autoFocusRangeRestriction to the AVCaptureDevice class. This property is of the enum AVCaptureAutoFocusRangeRestriction which has three different values:
AVCaptureAutoFocusRangeRestrictionNone - Default, no restrictions
AVCaptureAutoFocusRangeRestrictionNear - The subject that matters is close to the camera
AVCaptureAutoFocusRangeRestrictionFar - The subject that matters is far from the camera
To check whether this is available we should first check whether the property autoFocusRangeRestrictionSupported is true. And since it's only supported in iOS 7 and onwards, we should also use respondsToSelector: so we don't get an exception on earlier iOS versions.
So the resulting code should look something like this:
AVCaptureDevice *cameraDevice = zbarReaderView.device;
if ([cameraDevice respondsToSelector:@selector(isAutoFocusRangeRestrictionSupported)] && cameraDevice.autoFocusRangeRestrictionSupported) {
    // If we are on an iOS version that supports AutoFocusRangeRestriction and the device supports it
    // Set the focus range to "near"
    if ([cameraDevice lockForConfiguration:nil]) {
        cameraDevice.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
        [cameraDevice unlockForConfiguration];
    }
}
This seems to somewhat speed up the scanning of small QR-codes according to my initial tests :)
Update - iOS8
With iOS 8, Apple has given us lots of new camera APIs to play with. One of these new methods is this one:
- (void)setFocusModeLockedWithLensPosition:(float)lensPosition completionHandler:(void (^)(CMTime syncTime))handler
This method locks focus by moving the lens to a position between 0.0 and 1.0. I played around with the method, locking the lens at close values. However, in general it caused more problems than it solved. You had to keep the QR-codes/barcodes at a very specific distance, which could cause issues when you had codes of different sizes.
But I think I have found a pretty good alternative to locking focus altogether. When the user presses the scan button, I lock the lens to a close distance, and when that's finished I switch the camera back to auto focus. This gives us the benefits of keeping auto focus on, but forces the camera to begin at a close distance where a QR-code/barcode is likely to be found. This in combination with:
cameraDevice.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
And:
cameraDevice.focusPointOfInterest = CGPointMake(0.5,0.5);
Results in a pretty snappy scanner.
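A rough sketch of that press-to-scan behaviour (the lens position value and the startScanning method name are my own guesses, not from the post):
// When the user presses the scan button: nudge the lens to a near position first.
- (void)startScanning
{
    AVCaptureDevice *cameraDevice = zbarReaderView.device;
    if ([cameraDevice lockForConfiguration:nil]) {
        if ([cameraDevice respondsToSelector:@selector(setFocusModeLockedWithLensPosition:completionHandler:)]) {
            [cameraDevice setFocusModeLockedWithLensPosition:0.05f completionHandler:^(CMTime syncTime) {
                // Once the lens has settled near, hand control back to continuous autofocus.
                if ([cameraDevice lockForConfiguration:nil]) {
                    if ([cameraDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
                        cameraDevice.focusMode = AVCaptureFocusModeContinuousAutoFocus;
                    }
                    [cameraDevice unlockForConfiguration];
                }
            }];
        }
        [cameraDevice unlockForConfiguration];
    }
}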
I also built a custom scanner with the APIs introduced in iOS 7, instead of using ZBar, mostly because the ZBar libs are quite outdated: just as I had to recompile them for ARMv7s when the iPhone 5 was introduced, I would now have to recompile them again for ARM64.
// Alexander
iOS 8 recently added this configuration! It is almost like they read Stack Overflow.
/*!
 @method setFocusModeLockedWithLensPosition:completionHandler:
 @abstract
    Sets focusMode to AVCaptureFocusModeLocked and locks lensPosition at an explicit value.

 @param lensPosition
    The lens position, as described in the documentation for the lensPosition property. A value of AVCaptureLensPositionCurrent can be used to indicate that the caller does not wish to specify a value for lensPosition.

 @param handler
    A block to be called when lensPosition has been set to the value specified and focusMode is set to AVCaptureFocusModeLocked. If setFocusModeLockedWithLensPosition:completionHandler: is called multiple times, the completion handlers will be called in FIFO order. The block receives a timestamp which matches that of the first buffer to which all settings have been applied. Note that the timestamp is synchronized to the device clock, and thus must be converted to the master clock prior to comparison with the timestamps of buffers delivered via an AVCaptureVideoDataOutput. The client may pass nil for the handler parameter if knowledge of the operation's completion is not required.

 @discussion
    This is the only way of setting lensPosition.
    This method throws an NSRangeException if lensPosition is set to an unsupported level.
    This method throws an NSGenericException if called without first obtaining exclusive access to the receiver using lockForConfiguration:.
 */
- (void)setFocusModeLockedWithLensPosition:(float)lensPosition completionHandler:(void (^)(CMTime syncTime))handler NS_AVAILABLE_IOS(8_0);
EDIT: this is a method of AVCaptureDevice

(iOS) Offline Sync DB - Server

I'm trying to implement an app which sends offline data stored in a local DB to a web server when it connects to the internet. I use the code shown below. As far as I have tested, it works fine, but I'm not sure it will work well for a huge number of records. I would like to know whether any tweaking of this code might improve performance.
NOTE
I know this is far from the best code for offline sync, so I'm trying to tweak it to be better.
It's a one-way synchronization, from app to server.
-(void)FormatAnswersInJSON {
    DMInternetReachability *checkInternet = [[DMInternetReachability alloc] init];
    if ([checkInternet isInternetReachable]) {
        if ([checkInternet isHostReachable:@"www.apple.com"]) { // Change to domain
            responseArray = [[NSMutableArray alloc] init];
            dispatch_async(backgroundQueue, ^(void) {
                NSArray *auditIDArray = [[NSArray alloc] initWithArray:[self getUnuploadedIDs]];
                for (int temp = 0; temp < [auditIDArray count]; temp++) {
                    // Code to post JSON to server
                    NSURLResponse *response;
                    NSData *urlData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
                    if (!error) {
                        NSString *responseID = [[NSString alloc] initWithData:urlData encoding:NSUTF8StringEncoding];
                        if ([responseID isEqualToString:@"ERROR"]) {
                            // Error uploading records
                        } else {
                            [responseArray addObject:responseID];
                        }
                    } else {
                        // Error
                        return;
                    }
                }
                dispatch_async(backgroundQueue, ^{
                    /* Based on return code update local DB */
                    for (int temp = 0; temp < [responseArray count]; temp++) {
                        [self updateRecordsForID:[auditIDArray objectAtIndex:temp] withID:[responseArray objectAtIndex:temp]];
                    }
                });
            });
        }
    }
}

- (void)upload { // Called when internet connection available
    if (backgroundQueue) {
        dispatch_suspend(backgroundQueue);
        dispatch_release(backgroundQueue);
        backgroundQueue = nil;
    }
    backgroundQueue = dispatch_queue_create("com.XXXX.TestApp.bgqueue", NULL);
    dispatch_async(backgroundQueue, ^(void) {
        [self FormatAnswersInJSON];
    });
}
If this code were sitting in front of me, my approach would be:
Look at the use cases and define 'huge number of records': Will 50 record updates at a time occur regularly? Or will it be in 1s and 2s? Do my users have Wi-Fi connections or is it over a paid cellular network? etc.
If possible, test in the wild. If my user base was small enough, gather real data and let that guide my decisions, or only release the feature to a subset of users/beta tests and measure.
If the data tells you to, then optimize this code to be more efficient.
My avenue of optimization would be doing group processing. The rough algorithm would be something like:
for records in groups of X
collect
post to server {
on return:
gather records that updated successfully
update locally
}
This assumes you can modify the server code. You could do groups of 10, 20, 50, etc. all depends on the type of data being sent, and the size.
A group algorithm means a bit more pre-processing client side, but has the pro of reducing HTTP requests. If you're only ever going to get a small number of updates, this is YAGNI and premature optimization.
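A rough sketch of that batching idea (the endpoint, JSON shape, and batch size are hypothetical; it assumes the server can accept an array of records in one request):
static const NSUInteger kBatchSize = 20; // tune based on record size and network

- (void)uploadRecords:(NSArray *)records
{
    NSUInteger index = 0;
    while (index < records.count) {
        NSRange range = NSMakeRange(index, MIN(kBatchSize, records.count - index));
        NSArray *batch = [records subarrayWithRange:range];
        index += range.length;

        // One request per batch instead of one per record.
        NSData *body = [NSJSONSerialization dataWithJSONObject:@{@"records": batch} options:0 error:NULL];
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/sync"]];
        request.HTTPMethod = @"POST";
        [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
        request.HTTPBody = body;

        // Post the batch (asynchronously in real code) and, on success,
        // mark every record in `batch` as uploaded in the local DB.
    }
}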
Don't let this decision keep you from shipping!
Your code has a couple of issues. One convention is to always check the return value before you test the error parameter; the error parameter might be set even though the method succeeded.
When using NSURLConnection for anything other than a quick sample or test, you should always use the asynchronous style and handle the delegate methods. Since using NSURLConnection properly can quickly become cumbersome and error-prone, I would suggest utilizing a third-party framework which encapsulates an NSURLConnection object and all connection-related state as a subclass of NSOperation. You can find one example implementation in the Apple samples: QHTTPOperation. Another appropriate third-party framework would be AFNetworking (on GitHub).
When you use either the async style with delegates or a third-party subclass, you can cancel the connection, retrieve detailed error or progress information, perform authentication and much more, none of which you can do with the synchronous API.
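Even short of a full NSOperation subclass, the block-based convenience API at least avoids blocking the queue; a minimal sketch (the URL and the handling comments are placeholders):
NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/upload"]];
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *connectionError) {
    if (data == nil) {
        // Handle connectionError: retry later, keep the record marked as un-synced, etc.
        return;
    }
    NSString *responseID = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
    // Update the local DB for this record with responseID.
}];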
I think, once you have accomplished this and your approach works correctly, you can test whether the performance is acceptable. Unless you have large data, say >2 MB, I wouldn't worry too much.
If your data becomes really large, say >10 MB, you need to consider improving your approach. For example, you could provide the POST data as a file stream instead of an NSData object (see NSURLRequest's HTTPBodyStream property). Using a stream avoids loading all the POST data into RAM, which helps with the limited-RAM problem.
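For example, a sketch of streaming the body from a file (the file path is a placeholder):
NSString *bodyPath = @"/path/to/upload.json"; // placeholder path to the prepared POST body
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/upload"]];
request.HTTPMethod = @"POST";
request.HTTPBodyStream = [NSInputStream inputStreamWithFileAtPath:bodyPath];

// Set Content-Length explicitly, since the framework can't infer it from a stream.
NSNumber *fileSize = [[[NSFileManager defaultManager] attributesOfItemAtPath:bodyPath error:NULL] objectForKey:NSFileSize];
[request setValue:[fileSize stringValue] forHTTPHeaderField:@"Content-Length"];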
If you instead have smaller POST data, but possibly many requests, you might consider using an NSOperationQueue where you put your NSOperation connection subclasses. Set the maximum number of concurrent operations to 2. This may leverage HTTP pipelining, if the server supports it, which in effect reduces latency.
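A small sketch of that queue setup (UploadOperation stands in for whatever NSOperation-based connection wrapper you adopt, e.g. QHTTPOperation):
NSOperationQueue *uploadQueue = [[NSOperationQueue alloc] init];
uploadQueue.maxConcurrentOperationCount = 2; // a couple of requests in flight at a time

for (NSDictionary *record in recordsToUpload) {
    // UploadOperation is a placeholder for an NSOperation-based connection wrapper.
    UploadOperation *op = [[UploadOperation alloc] initWithRecord:record];
    [uploadQueue addOperation:op];
}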
Of course, there might be other parts of your app, for example where you create or retrieve the data you have to send, that affect the overall performance. However, if your code is sound and utilizes dispatch queues or NSOperations that let things run in parallel, there aren't many more options for improving the performance of the connection.
