I'm familiar with SOAP and Webservices for peer-to-peer or client/server messaging on other platforms. What is the Apple equivalent for messaging?
I'm trying to put together an application server that will manage a collaborative work environment. It will have an average of 200 and a maximum of 1,000 mobile clients on a LAN. For example, one Mac and 200 iPhones. I'm planning to use Apple's Enterprise approach to distribute our private (commercial) app to each of our iPhone clients.
The server can push configuration settings to each client. The server will also push small sets of data to the clients. The clients will perform tasks on this data locally, and eventually report back status to the server. The clients can request more data sets from the server. These exchanges can be managed asynchronously.
There will also be a need for synchronous exchanges for critical processes. This is rare, but it is a requirement.
The data that needs to be exchanged is reasonably small. These client/server processes are time dependent, so performance is a critical requirement.
The network won't be shared, but it needs to be tightly controlled (and fast). For example, it could be Bonjour if there is a compelling reason for that.
I'm looking for a recommendation on which components of the Mac/iOS SDKs to leverage. Please leave 3rd party software out of this discussion. I must know what Apple already offers and what gaps exist before considering outside software. Thanks.
If you are only going to be using Mac and iOS products, then Bonjour is really your best bet. There are a ton of native ways to transmit data in either byte or object forms.
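For a concrete sense of what that looks like, here is a rough sketch using the Network framework (the NSNetService/NSNetServiceBrowser APIs from that era work the same way conceptually). The _collab._tcp service type, the service name, and the payload are all made up for illustration:

```swift
import Foundation
import Network

// --- On the Mac acting as the server: advertise a Bonjour service on the LAN. ---
var clientConnections: [NWConnection] = []
let listener = try! NWListener(using: .tcp)
listener.service = NWListener.Service(name: "WorkServer", type: "_collab._tcp")  // hypothetical service type
listener.newConnectionHandler = { connection in
    clientConnections.append(connection)   // keep a reference so the connection stays alive
    connection.start(queue: .main)
    // Read requests here and push configuration or data sets back to the client.
}
listener.start(queue: .main)

// --- On each iPhone: browse for that service and connect to whatever is found. ---
var serverConnection: NWConnection?
let browser = NWBrowser(for: .bonjour(type: "_collab._tcp", domain: nil), using: .tcp)
browser.browseResultsChangedHandler = { results, _ in
    guard serverConnection == nil, let server = results.first else { return }
    let connection = NWConnection(to: server.endpoint, using: .tcp)
    connection.start(queue: .main)
    connection.send(content: Data("STATUS ready".utf8),
                    completion: .contentProcessed { _ in })   // placeholder message
    serverConnection = connection          // keep a reference so the connection stays alive
}
browser.start(queue: .main)

RunLoop.main.run()   // keep a command-line sketch alive; an app's run loop does this already
```

The two halves would of course live in separate programs; the point is just that discovery, connection, and raw byte transfer are all covered by the SDK without any third-party code.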
There were a couple of great videos from WWDC 2012 showing what the new Xcode can do to create client and server applications. They are under NDA, but you may want to take a look there: https://developer.apple.com/videos/wwdc/2012/.
Also, you said you did not want third-party software; is there a particular reason? Would you use an open-source layer on top of Bonjour?
As we heard at this year's WWDC, at the end of 2016 Apple will make ATS (App Transport Security) mandatory for all developers who hope to submit their apps to the App Store. http://9to5mac.com/2016/06/15/ats-https-ios-apps/
http://www.cso.com.au/article/577197/apple-tells-ios-9-developers-use-https-exclusively/
It is understandable in cases where privacy, encryption (...) is a factor. But what about simple (news...) feeds and APIs where that is not the case?
What about simple JSON or RSS feeds? For example, I have a very simple public JSON feed that can be called without any authorization; will it also need HTTPS? And what about simple RSS feeds? The huge majority of them currently communicate via HTTP. What about downloading image files from the web in an app?
Thanks in advance!
This is very rapidly becoming "the new normal." (Did you notice that even Wikipedia now uses https connections to their site?) Non-encrypted communications can be effortlessly intercepted, e.g. in the coffee shops and public places where so many people routinely find themselves. The problem is even more severe now that "free public WiFi" is available in "ordinary" stores and Wal-Marts, and people have their phones set to automatically connect to any of them. (People do not realize how insecure they are! But, they're learning ...)
The most appropriate solution, then, is to "encrypt everything." And so, this is what Apple is now mandating.
Yes, even "routine" communications, news-feeds and such. All of the traffic that passes through the airwaves will be encrypted.
Remember, also, that these techniques not only secure the communication, but are capable of identifying the sender and the receiver to one another through mutually-held certificates. (Web sites don't always use client-identification, although they can, and apps definitely should.) This, if used properly, will close a very big headache-hole for servers, because they now will know just who they are talking to. Client software can trust that they are talking to the right server, and that their communications are "received as tendered."
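On iOS, for example, presenting a client certificate comes down to answering the client-certificate challenge in a URLSession delegate. A minimal sketch, assuming the SecIdentity (private key plus certificate) has already been loaded from the keychain or a bundled PKCS#12 file:

```swift
import Foundation
import Security

final class MutualTLSDelegate: NSObject, URLSessionDelegate {
    // The app's identity (private key + certificate). How it is loaded (keychain,
    // embedded .p12, MDM-provisioned profile) is outside this sketch.
    let identity: SecIdentity

    init(identity: SecIdentity) { self.identity = identity }

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        if challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodClientCertificate {
            // The server asked the client to prove who it is: hand over our identity.
            let credential = URLCredential(identity: identity, certificates: nil, persistence: .forSession)
            completionHandler(.useCredential, credential)
        } else {
            // Server-trust evaluation and everything else keeps the default handling.
            completionHandler(.performDefaultHandling, nil)
        }
    }
}
```

A URLSession created with this delegate will present the certificate whenever the server requests it, which is what gives the server the "I know who I'm talking to" guarantee described above.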
"Android or iOS or Windows or what-have-you," you should be doing this. Every mobile device implements SSL and possibly other encryption stacks. Do not send anything over air-waves "in the clear."
A "simple feed", in some country, can kill you. Protect your users, encrypt everything.
I'm designing some OS X/iOS apps that I'd like to have share a resource hosted on a web server. I would like to have some sort of web app or script that can store a list of subscribers and notify them when the resource is updated. (The obvious goal here is to avoid having every app poll the web server for updates.)
The only trick here is that I'd like a significant number of clients (say, a dozen) to be subscribed for updates on a 24/7 basis. I'm not sure if it's a good idea for all of the clients to maintain a live connection... I can't imagine that many web service providers will be happy about their web server maintaining a dozen persistent connections (especially if they're virtually always idle).
(Edit) I looked into the Apple Push Notification service (APNs), but it's not the right solution for my problem. APNs requires an Entrust SSL certificate and some heavy interaction with the Apple Push Notification service. My project is much simpler and more lightweight: I just need a script that says, "Upon receiving data from Device A, push it out to Devices B/C/D" (presuming those devices are somehow accessible... either through a persistent connection or some other technique).
What's the absolute simplest way of providing this mechanism?
The "simplest way" probably means different things to different people. If you're not a fan of locking yourself into third party services then there's a veritable plethora of app frameworks and open source tools you could use to build something yourself. But this is hardly 'simple' if web app development isn't your strong point.
There are several 'off the shelf' services available to do real-time messaging on iOS; bear in mind I'm just listing the ones I know from memory, and there are other alternatives. Pusher and PubNub both offer real-time messaging services for mobile apps, along with ready-to-go SDKs. You can interface with them to send messages bi-directionally via sockets (similar to how APNs works, but with considerably more control).
You could use these services with your own device/user management system, or you could use a 'backend as a service' provider such as Parse or StackMob - you may not need this step, it depends how complex your intended app/integration is.
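If you do end up holding the connection open yourself rather than going through one of those services, the client side can be as small as a URLSessionWebSocketTask that waits for update notices. In this sketch the wss://example.com/updates endpoint and the message format are placeholders, not a real service:

```swift
import Foundation

// Placeholder endpoint: the server holds this socket open and writes a small
// message to every subscribed client whenever the shared resource changes.
let task = URLSession.shared.webSocketTask(with: URL(string: "wss://example.com/updates")!)

func listen() {
    task.receive { result in
        switch result {
        case .success(.string(let notice)):
            print("resource updated:", notice)   // e.g. trigger a re-fetch of the resource
        case .success:
            break                                // ignore binary frames in this sketch
        case .failure(let error):
            print("socket closed:", error)       // a real client would reconnect with backoff
            return
        }
        listen()                                 // keep waiting for the next update
    }
}

task.resume()
listen()
RunLoop.main.run()   // keep a command-line sketch alive; not needed inside an app
```

A dozen mostly idle sockets is a trivial load for a server you control; it is shared hosting plans that tend to frown on long-lived connections.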
XMPPFramework has a publish–subscribe module (for XEP-0060) which works with most XMPP servers. I've even adapted it to work with Chat Server which comes with Snow Leopard.
If you already have an XMPP server this might be worth doing; otherwise it's kind of a heavyweight solution.
I'm building a system with some remote desktop capabilities. The client is considered every computer which is sharing its desktop, the server is considered a central server with a database which receives the images of all the multiple desktops. On the client side, I would like to build two projects: A windows service application and a VCL forms application. Each client app would presumably be running under a different user account on the computer, so there might be multiple client apps running at once, and they all send their image into this client service, which relays them to the central server.
The service will be responsible for connecting to the server, sending the image, and receiving mouse/keyboard events. The application, which is running in the background, will connect to this service somehow and transmit the screenshots to the service. The goal is that one service is running while multiple "clients" are able to connect to it and send their desktop image. This service will be connected to the "central server" which receives all these different screenshots from different "clients". The images will then be either saved and logged or redirected to any "dashboard" which might be viewing that "client".
The question is: what method should I use to connect the client applications to the client service to send images? They will be running on the same computer. I will need both the ability to send simple command packets and the ability to stream chunks of an image. I was about to use the Indy components (TIdTCPServer etc.) but I'm sure there must be an easier and cleaner way to do it. I'm using the Indy components elsewhere in the projects too.
Here's a diagram of the overall system I'm aiming for - I'm just worried about the parts on the far right and far left - where the apps connect to the service within the same computer. As you can see, since there are many layers, I need to make sure whatever method(s) I use are powerful enough to accommodate streaming massive amounts of image data.
For communication among processes you can use pipes, mailslots, or sockets. I also think that for streaming file/image data, shared memory may be the most efficient way.
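The question is about Delphi, but the framing problem is the same whatever you use for the local link (named pipe, loopback TCP with TIdTCPServer/TIdTCPClient, etc.): each message needs a command code and a length so the service knows where one screenshot ends and the next command begins. A minimal sketch of that framing, written in Swift purely for illustration, with made-up command codes:

```swift
import Foundation

// Hypothetical one-byte command codes for the local app<->service link.
enum Command: UInt8 {
    case screenshotChunk = 0x01
    case statusUpdate    = 0x02
}

// Build one frame: [command byte][4-byte big-endian payload length][payload].
func makeFrame(_ command: Command, payload: Data) -> Data {
    var frame = Data([command.rawValue])
    let count = UInt32(payload.count)
    frame.append(contentsOf: [UInt8((count >> 24) & 0xFF), UInt8((count >> 16) & 0xFF),
                              UInt8((count >> 8) & 0xFF),  UInt8(count & 0xFF)])
    frame.append(payload)
    return frame
}

// Pull one complete frame off the front of a receive buffer, if one has arrived.
func readFrame(from buffer: inout Data) -> (Command, Data)? {
    guard buffer.count >= 5 else { return nil }                       // header not complete yet
    guard let command = Command(rawValue: buffer[buffer.startIndex]) else { return nil }
    let lengthBytes = buffer.subdata(in: buffer.startIndex + 1 ..< buffer.startIndex + 5)
    let length = Int(lengthBytes.reduce(UInt32(0)) { ($0 << 8) | UInt32($1) })
    guard buffer.count >= 5 + length else { return nil }              // payload still arriving
    let payload = buffer.subdata(in: buffer.startIndex + 5 ..< buffer.startIndex + 5 + length)
    buffer.removeFirst(5 + length)
    return (command, payload)
}
```

With framing like this, streaming a large screenshot is just a sequence of screenshotChunk frames, and small command packets travel over the same connection.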
I've done this a few times now, in a number of different configurations. The key to making it easy for me was using the RemObjects SDK which took care of the communications part. With a thread that controls its state, I can have a connection to a server or service that is reliable, and can transfer anything from a status byte through to transferring many megabytes of data (it is recommended that you use small chunks for large data so that you have more fine grained control over errors and flow).

I now have a set of high reliability templates that I can deploy to make a new variation quite easily, and it can be updated with new function calls without much hassle (first thing I do is negotiate versions between the client and server so they know what they can support). Because it all works at a high level, my code is just making "function calls" and never worrying about what the format on the wire is. Likewise I can switch from their binary format to standard SOAP or other without changing the core logic.

Finally, the connections can be local, to the same machine (I use this for end user apps talking to a background service) or to a machine on the LAN or internet. All in the same code.
We intend to design a system with three "tiers".
HQ, with a single server
lots of "nodes" on a regional basis
users, with iPads.
HQ communicates 2-way with the nodes, which communicate 2-way with the users. Users never communicate with HQ nor vice-versa.
The powers that be decree a Windows app from HQ (using Delphi) and a native desktop app for the users' iPads. They have no opinion on the nodes.
If there are compelling technical arguments, I might be able to beat them down from "decree" to "prefer" on the Windows program (and, for instance, make it browser based). The nodes have no GUI; they just sit there playing middle-man.
What's the best way for these things to communicate (SOAP/HTTP/AJAX/jQuery/home-brewed-protocol-on-top-of-TCP/something-else)? Is it best to use the same protocol end to end, or different protocols for HQ<-->node and node<-->iPad?
Both ends of each of those two interfaces might wish to initiate a transaction (which I can easily do if I roll my own protocol), so should I use push/pull/long-poll or what?
I hope that this description makes sense. Please ask questions if it does not. Thanks.
Update:
File size is typically below 1MB with nothing likely to be above 10MB or even 5MB. No second file will be sent before a first file is acknowledged.
Files flow "downhill" from HQ to node to iPad. Files will never flow "uphill", but there will be some small packets of data (in addition to acks) which are initiated by user action on the iPad. These will go to the local node and then to the HQ. We are probably talking <128 bytes.
I suppose there will also be general control & maintenance traffic at a low rate, in all directions.
For push/pull (publish/subscribe or peer-to-peer communication), cross-platform message brokers could be used. I am not sure if there are (iOS) client libraries for Microsoft Message Queuing (MSMQ), but I would also evaluate open source solutions like HornetQ, Apache ActiveMQ, Apollo, OpenMQ, Apache Qpid or RabbitMQ.
All these solutions provide a reliable foundation for distributed messaging (failover, clustering, persistence) with high performance and many clients attached. On this infrastructure, messages with any content type (JSON, binary, plain text) can be exchanged, and on top of that, messages can carry routing and priority information. They also support transacted messaging.
There are Delphi and Free Pascal client libraries available for many enterprise-quality open source messaging products. (I am the author of some of them, supporting ActiveMQ, Apollo, HornetQ, OpenMQ and RabbitMQ.)
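Most of those brokers also speak STOMP in addition to their native protocols, which makes it easy to see what a publish actually looks like on the wire. A minimal sketch, assuming a broker at broker.example.local with its STOMP listener on port 61613 and a made-up /queue/node-updates destination (a real client would also read the CONNECTED and RECEIPT frames coming back):

```swift
import Foundation
import Network

// Minimal STOMP 1.2 frame: command line, headers, blank line, body, NUL terminator.
func stompFrame(_ command: String, headers: [String: String], body: String = "") -> Data {
    var text = command + "\n"
    for (key, value) in headers { text += "\(key):\(value)\n" }
    text += "\n" + body + "\0"
    return Data(text.utf8)
}

// Placeholder broker address; ActiveMQ, Apollo, HornetQ and RabbitMQ (with its STOMP
// plugin) all accept frames like these on their STOMP port.
let connection = NWConnection(host: "broker.example.local", port: 61613, using: .tcp)
connection.stateUpdateHandler = { state in
    guard case .ready = state else { return }

    // 1. Open the STOMP session.
    let connect = stompFrame("CONNECT", headers: ["accept-version": "1.2",
                                                  "host": "broker.example.local"])
    connection.send(content: connect, completion: .contentProcessed { _ in
        // 2. Publish a small JSON notice to a hypothetical queue read by the nodes.
        let send = stompFrame("SEND", headers: ["destination": "/queue/node-updates",
                                                "content-type": "application/json"],
                              body: #"{"file":"config.json","bytes":2048}"#)
        connection.send(content: send, completion: .contentProcessed { _ in })
    })
}
connection.start(queue: .main)
```

The same frames could be written from Delphi over a plain TCP socket, which is essentially what the client libraries mentioned above do for you, plus connection management and acknowledgements.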
Check out MessagePack: http://msgpack.org/
Also, here's more RPC discussion on SO:
RPC frameworks available?
MessagePack: fast cross-platform serializer and RPC - please share experience
ICE might be of interest to you: http://zeroc.com/index.html
They have an iOS layer: http://zeroc.com/icetouch/index.html
IMHO there are too few requirements here to decide what technology to use. What data are exchanged, how often, and at what size? Are there request/response time constraints? Etc., etc. Never start selecting a technology before you understand your needs deeply.
I have heard that web-based chat clients tend to use networking frameworks such as the Twisted framework.
But would it be possible to build a web-based chat client without a networking framework, using only Ajax connections?
I would like to build a session-based one-to-one web chat client that uses sessions to indicate when a chat has ended. Would this be possible in Rails using only Ajax and without a networking framework?
What effect does it have to use a networking framework and what impact would it have on my app to not use one? Also any general recommendations for approaching this project would be appreciated.
If I understand you correctly, you want to have two clients connect to your server and send messages to each other through Ajax, via the server.
This is possible; there are two approaches to doing it.
The easy approach is to have both clients poll every few seconds to check for new messages posted by the other. The drawback is that messages are not delivered instantly. I think there is an example of this in the Rails book.
The more complex approach is to keep an open connection and send the messages to the client as soon as they are received by the server. To do this you can use something like Juggernaut.
I would like to add that though the latter works, it is not something HTTP was meant for and it is a bit of a hack, but hey, whatever gets the job done. A working example of this is the Rails chat project, which uses a Juggernaut derivative.
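For what it's worth, the first (polling) approach is just a timer plus a request for anything newer than the last message seen. The question is about a browser client calling a Rails action via Ajax, but the pattern is the same on any platform; here it is sketched in Swift against a hypothetical /messages endpoint that takes a ?since= parameter:

```swift
import Foundation

struct ChatMessage: Decodable {
    let id: Int
    let body: String
}

// Hypothetical endpoint: GET /messages?since=<id> returns messages newer than <id>.
let messagesURL = URL(string: "https://chat.example.com/messages")!
var lastSeenID = 0

func pollOnce() {
    var components = URLComponents(url: messagesURL, resolvingAgainstBaseURL: false)!
    components.queryItems = [URLQueryItem(name: "since", value: String(lastSeenID))]
    URLSession.shared.dataTask(with: components.url!) { data, _, _ in
        guard let data = data,
              let messages = try? JSONDecoder().decode([ChatMessage].self, from: data) else { return }
        for message in messages {
            print("new message:", message.body)
            lastSeenID = max(lastSeenID, message.id)
        }
    }.resume()
}

// Poll every few seconds; push (Juggernaut-style) removes this delay but needs a held-open connection.
let pollTimer = Timer.scheduledTimer(withTimeInterval: 3.0, repeats: true) { _ in pollOnce() }
RunLoop.main.run()   // keep a command-line sketch alive
```

The browser version is the same loop with setInterval and an XMLHttpRequest/fetch call against the Rails controller.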
Technically speaking, every network-based application has a networking framework under it and, therefore, is socket-based...
The only real question here is whether you want to have all that chatter go through your server or allow point-to-point communication. If the former, you can use the Ajax framework to talk to your web server. This means that all of your clients will be constantly polling the web server for updates.
If the latter, then you have to allow direct TCP connections between the two clients and need to get a little closer to the metal, so to speak.
So, ask yourself this: Do you want to pay for the traffic costs AND have potential liability over divulging whatever it is that people might be typing into their client; or, would you rather just build a chat program that people can use to talk to each other?
Of course, before even going that far, do you really want to build yet another chat client? That space is already pretty crowded.