I'm running some tests on these two networking frameworks: MKNetworkKit and AFNetworking, and I'm finding it hard to see the differences between the two libraries. What are the major differences?
MKNetworkKit:
Cache on disk included.
Frozen operations (offline requests can be queued to be executed when the network is back).
More lightweight.
AFNetworking:
More users and contributors.
Better documentation (clearer and more accessible).
UIImageView+AFNetworking for lazy image loading (possible in MKNK, but more painful); see the sketch after this list.
Standard (Apple-like) coding style.
Better leverage of SDK objects (NSCoding compliant).
Great variety of extra features and extensions (e.g. network reachability, streaming multipart form requests, backgrounding support, etc).
Has a nice project logo. ;-)
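To illustrate the lazy image loading point above, here is a minimal sketch using AFNetworking's UIImageView+AFNetworking category (the URL and placeholder image name are just examples):

```objc
#import "UIImageView+AFNetworking.h"

// In a table view cell, for instance: the category fetches the image
// asynchronously, caches it, and sets it on the image view when it arrives.
[cell.imageView setImageWithURL:[NSURL URLWithString:@"https://example.com/avatar.png"]
               placeholderImage:[UIImage imageNamed:@"placeholder"]];
```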
Anything else?
I'd like to migrate from AFNetworking to Alamofire in a mature app. Since the app is quite big, we think a step-by-step migration would carry less risk.
Nevertheless, we have some concerns regarding:
Sessions
Security (Pinning etc.)
Observers / Listeners
Queues
Caching
...
Does anyone have experience with mixing AFN and Alamofire in Swift apps? I would also be grateful for reports of problems you faced running both frameworks in parallel.
Thanks
It's possible, and I've done so many times. You should try it, see what issues you run into, and ask specific questions about them.
May I suggest you stick with AFNetworking. Having two frameworks that do the same thing isn't usually desirable, even in the short term. Alamofire is the better choice in a Swift-only project; if your codebase is 90% Objective-C, it's just not ideal, and the result could be a lot of hidden bugs.
I am trying to get UserProfiles data from a SharePoint 2010 site using Objective-C within Xcode. At the moment I am using the SOAP service in my project. Can anyone point me in the right direction here? Thank you.
You probably mean "iOS" or "Cocoa" instead of Xcode.
If possible, avoid SOAP. It's much easier to access a web service via REST with JSON as the transport format - and in 99.8% of all use cases, a RESTful web service and JSON will fulfill your requirements completely.
What you need to accomplish your task can be summarized as "networking development", which involves NSURLConnection (and its related classes), NSJSONSerialization, and a few other system classes depending on your needs.
Unless you stick with a RESTful web service, JSON, and moderate requirements, networking can quickly become complex - and it becomes unnecessarily complex when using SOAP. You may want to use a third-party library to help here.
I'm assuming you are already familiar with the major principles of programming in Objective-C for Mac OS X and iOS. So I would suggest starting by reading the networking examples in the Apple docs that use NSURLConnection (e.g. MVCNetworking).
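To make that concrete, here is a minimal sketch of the NSURLConnection plus NSJSONSerialization approach described above, assuming a RESTful endpoint that returns JSON (the URL is a placeholder and error handling is reduced to the basics):

```objc
NSURL *url = [NSURL URLWithString:@"https://example.com/api/userprofiles"];
NSURLRequest *request = [NSURLRequest requestWithURL:url];

// Available since iOS 5: performs the request off the main thread and calls
// back on the given queue with the raw response data.
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    if (!data) {
        NSLog(@"Request failed: %@", error);
        return;
    }
    NSError *jsonError = nil;
    id json = [NSJSONSerialization JSONObjectWithData:data options:0 error:&jsonError];
    if (!json) {
        NSLog(@"JSON parsing failed: %@", jsonError);
        return;
    }
    NSLog(@"Parsed response: %@", json);
}];
```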
It seems like there are a bunch of ways to do networking in Cocoa: WebKit, NSURL, CFNetwork, BSD sockets. Are there any other APIs/frameworks that are commonly used for networking? I'm trying to understand all the ways to do networking in Cocoa and learn each one's strengths and weaknesses.
As a related question, why would anyone use CFSocket? It seems that most things can be done with NSURL or BSD sockets. Is CFSocket commonly used in practice?
You can watch the WWDC videos Network Apps for iPhone (Part 1, Part 2) and Networking Best Practices, where they suggest using NSURLConnection for HTTP and HTTPS, the CFSocket/CFStream/NSStream family for other TCP networking, and of course WebKit if you just intend to render web content. They advise against using low-level BSD sockets unless you're writing a server. The higher-level the framework you use, the more is taken care of for you (from DNS resolution to cellular network management, authentication, encryption, run loop integration...) and the better it integrates with the rest of Cocoa.
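As a rough illustration of the CFStream/NSStream route for non-HTTP TCP, here is a sketch that opens a socket pair and schedules it on the run loop; the host and port are made up, and self is assumed to be an object conforming to NSStreamDelegate:

```objc
// Create a connected read/write stream pair without dropping down to BSD sockets.
CFReadStreamRef readStream = NULL;
CFWriteStreamRef writeStream = NULL;
CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                   CFSTR("example.com"), 8080,
                                   &readStream, &writeStream);

NSInputStream *inputStream = (__bridge_transfer NSInputStream *)readStream;
NSOutputStream *outputStream = (__bridge_transfer NSOutputStream *)writeStream;

// Run loop integration is what the higher-level API buys you: the delegate
// receives NSStreamEvent callbacks instead of you polling a raw socket.
inputStream.delegate = self;
outputStream.delegate = self;
[inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[inputStream open];
[outputStream open];
```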
For iOS, the best networking suite is AFNetworking. It is actively developed and has everything you should need for the networking in your project.
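For a flavour of what that looks like, here is a minimal sketch against the AFNetworking 2.x API (the URL is a placeholder):

```objc
#import <AFNetworking/AFNetworking.h>

AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];

// The manager serializes the response as JSON by default and calls back on the main queue.
[manager GET:@"https://example.com/api/items"
  parameters:nil
     success:^(AFHTTPRequestOperation *operation, id responseObject) {
         NSLog(@"Fetched: %@", responseObject);
     }
     failure:^(AFHTTPRequestOperation *operation, NSError *error) {
         NSLog(@"Request failed: %@", error);
     }];
```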
I need to implement basic RESTful functionality in my app e.g. post/get/delete + json parsing.
What would you suggest to use as a framework:
1) Resty http://projects.lukeredpath.co.uk/resty/ (ARC?)
2) http://restkit.org/ (ARC?)
What are the advantages of each?
The reason I am asking is that yesterday I implemented some RESTful features using ASIHTTPRequest, but today I read that it is no longer being supported :(
One more thing: I have heard there is built-in JSON support in the iOS 5 SDK (NSJSONSerialization, I believe); would it work for an iOS 4 client (is it a compile-time or run-time dependency?), and can it do POST/GET requests?
Depends on your requirements.
If you just need some RESTful communication with a server, then Resty isn't too bad (though I've never used it, it looks straightforward).
RestKit, on the other hand, is a powerful package because of one killer feature: integration with Core Data. RestKit can parse JSON responses, map them into objects, and save those objects to Core Data with minimal coding out of the box. This makes it highly useful if that is the type of functionality you are looking for.
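To give a feel for that, here is a rough sketch in the style of RestKit 0.20's object mapping API. The Article class, path, and key path are made up, exact signatures vary between RestKit versions, and the Core Data flavour (RKEntityMapping backed by a managed object store) follows the same pattern:

```objc
#import <RestKit/RestKit.h>

RKObjectManager *manager = [RKObjectManager managerWithBaseURL:
                            [NSURL URLWithString:@"https://api.example.com"]];

// Describe how JSON keys map onto the properties of a (hypothetical) Article class.
RKObjectMapping *mapping = [RKObjectMapping mappingForClass:[Article class]];
[mapping addAttributeMappingsFromArray:@[@"title", @"body"]];

RKResponseDescriptor *descriptor =
    [RKResponseDescriptor responseDescriptorWithMapping:mapping
                                                  method:RKRequestMethodGET
                                             pathPattern:@"/articles"
                                                 keyPath:@"articles"
                                             statusCodes:RKStatusCodeIndexSetForClass(RKStatusCodeClassSuccessful)];
[manager addResponseDescriptor:descriptor];

// One call performs the GET, parses the JSON, and hands back mapped objects.
[manager getObjectsAtPath:@"/articles"
               parameters:nil
                  success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
                      NSLog(@"Mapped articles: %@", mappingResult.array);
                  }
                  failure:^(RKObjectRequestOperation *operation, NSError *error) {
                      NSLog(@"Request failed: %@", error);
                  }];
```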
I would encourage you to define the functionality you need and have a look at both frameworks. If it's basic stuff, as you say, then one could argue that RestKit may be overkill.
As for the built-in JSON parsing library, it is way down the list in priority. These frameworks already include a JSON parser, and they work pretty well. I would seriously consider the advantages before retrofitting these packaged frameworks with a JSON parser of your choice.
I have some data that I need to share between multiple services on multiple machines. Stuffing the data into a database or shuffling it over HTTP won't work in this situation, and ideally the different pieces of software will need to communicate with each other directly (or through one central coordinator that can send and receive).
Is it recommended to create and implement a network protocol myself, or to use some existing tool for the communication?
If I did go the route of creating a protocol myself, it wouldn't have to be very complex: under 10 different message types, but it would have to be re-implemented in a few different languages for this project and support Unicode. I have read plenty (and done some work) with sockets, but I don't have much knowledge of handling a protocol I create myself. Are there any good resources on this?
There are also things like ICE and RPC that look interesting. The limit of my experience is using ICE and XML-RPC for a few days each. Is this the better route to go? If so, what tools are out there?
Recently I've been using Google Protocol Buffers for encoding and shipping data between different machines running software written in different languages. It is quite easy to do, and takes away a lot of the hassle of designing a custom protocol.
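In practice you describe the messages once in a .proto schema and generate the encoders/decoders for each language from it; the message below is purely illustrative:

```proto
// sensor.proto - shared schema, compiled into each language you need
message SensorReading {
  required string device_id = 1;  // UTF-8 strings cover the Unicode requirement
  required double value = 2;
  optional string unit = 3;
}
```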
Without knowing what technologies and platforms you are dealing with, it's difficult to give you a very specific answer - so I'll try to give you some general feedback.
If the system(s) you wish to connect span more than a single platform and/or technology, you are probably better off using an existing transport mechanism and protocol, to maximize the chance that your base platform already has a library (or several) for it. Integrating security and other features into a stack with known behaviors is also more likely to be documented, with examples floating around. RPC (and ICE, though I have less familiarity with it) has some useful capabilities, but it also requires a lot of control over the environment, and security can be convoluted (particularly if you are passing objects between different languages).
With regard to avoiding polling, this is a performance-related issue; there are design patterns which can help you handle such things if you understand how you need the system to work (e.g. the observer pattern, a kind of don't-call-us-we'll-call-you approach). The network environment you are playing in will dictate which options are actually viable (e.g. a local LAN has different considerations from something which runs over a WAN or the internet). Factors like firewall tunneling, VPN traversal, etc. should play a part in your final technology choices.
The only other major consideration (that I can think of just now... ;-)) is the type of data you need to pass around. Is it just text, or do you need to stream binary objects? Would an encoding format (like XML, JSON, or BSON) do the trick? You mention "under 10 message types" in the question, but is that the only information that would ever need to be communicated by the system?
Either way, unless the overhead of existing protocols is unacceptable, you're better off leveraging established work 99% of the time. Creativity is great, but commercial projects usually benefit from well-known behaviors, even if they're not the coolest or slickest (kind of the "as long as it works..." approach).
hth!