Should an iPhone app communicate directly with a Cassandra backend?

Obviously there are multiple steps and phases of implementing such a thing.
I was thinking I would eventually have a web server that takes HTTP/JSON requests from the iOS app, then queries the Cassandra backend and sends results back. I could still load balance and all that fancy stuff, provide a logical layer on the server side, and keep the client app lightweight.
I'm not sure I understand how Cassandra clients fit in, though. It seems like the Cassandra Objective-C client could eliminate the need for the above approach.
I saw another question and answer, but it wasn't clear, perhaps because it varies by need.

An iPhone app should not directly connect to a Cassandra backend or any other DB store.
First of all, talking to a database requires speaking a very specific binary protocol (for Cassandra in particular, binary CQL or Thrift). Writing an adapter that would let your Objective-C app speak this protocol is a major piece of work, and could easily cost more effort than the rest of your app. If you hide the DB behind a web server, however, you will be able to select from a variety of existing adapters available in different server-side languages, meaning that you don't need to redo all that low-level work. You'll only be responsible for a relatively small piece of server-side code that translates your REST queries and forwards them to one of the Cassandra adapters (which expose easy-to-use interfaces).
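To make that "relatively small piece of server-side code" concrete, here is a minimal sketch, assuming Python with Flask and the DataStax cassandra-driver package; the contact point, keyspace, table, and route are invented for illustration:

from cassandra.cluster import Cluster  # DataStax Python driver for Cassandra
from flask import Flask, jsonify

app = Flask(__name__)

# Only this process speaks CQL to Cassandra; the iPhone app only ever sees HTTP/JSON.
session = Cluster(["10.0.0.5"]).connect("my_keyspace")  # hypothetical node and keyspace

@app.route("/users/<user_id>/posts")
def user_posts(user_id):
    # Translate the REST request into a parameterized CQL query.
    rows = session.execute(
        "SELECT post_id, body FROM posts WHERE user_id = %s",
        (user_id,),
    )
    return jsonify([{"post_id": str(r.post_id), "body": r.body} for r in rows])

if __name__ == "__main__":
    app.run()

Any load balancing, authentication, or caching then happens in front of this endpoint, and the iOS app stays a plain HTTP/JSON client.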
Secondly, if you wanted to connect to a remote database from the phone, your database server would have to open its ports to the internet at large, which is a very bad security practice, even if you use SSL and user credentials. Again, if you hide behind a web server, you will be putting in a layer of technology that has evolved for decades to remain secure on the public internet.
Finally, having your phone talk to Cassandra directly is a poor architectural pattern. When you write apps that communicate on the internet, you want them to know as little as possible about each other, only how to talk to each other (preferably in a standard protocol). That way you can replace or upgrade individual components while keeping everything else the same. This may not sound like a lot, but is actually the main reason why phones, or web browsers, don't directly talk to databases. (If this setup were a good idea in principle, the first two problems could be easily solved given enough engineering effort.)
The approach you first suggested with JSON and the web server is the only correct way to go.

Use something like a RESTful API; there are many reasons for that.
If your servers' IP addresses change, you have to update all clients; if you add more nodes, you will need to update all clients; if you decide to upgrade Cassandra and some functions change, your clients will break and you will need to update all of them.

Related

FoundationDB, the layer: is it hosted in the client application or on server nodes?

Recently I was reading about the concept of layers in FoundationDB. I like their idea: decoupling the storage on one side from access to it on the other.
There are some points that are unclear to me regarding the implementation of layers, especially how they communicate with the storage engine. There are two possible answers: they are part of the server nodes and communicate with the storage through fast native API calls (e.g. as linked modules hosted in the server process), or they are hosted inside the client application and communicate over a network protocol. For example, the SQL layer of many RDBMSs is hosted on the server. How are things with FoundationDB?
PS: These two cases differ from a performance point of view, especially when the client-server communication is high-latency.
To expand on what Eonil said: the answer rests on the distinction between two different senses of "client" and "server".
Layers are not run within the database server processes. They use the FDB client API to make requests of the database, and do not (with one exception*) get to pierce the transactional key-value abstraction.
However, there is nothing stopping you from running the layers on the same physical (or virtual) machines as the database server processes. And, as that post from the community site mentions, there are use cases where you might very much wish to do this in order to minimize latencies.
*The exception is the Locality API, which is mostly useful in exactly those cases where you want to co-locate client-side layers with the data on which they operate.
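As a toy illustration of what "layer" means in practice, here is a sketch using the FoundationDB Python bindings; the counter key scheme and the API version number are invented for the example. The point is simply that all layer logic runs in the client process, and only transactional reads and writes cross the network:

import fdb

fdb.api_version(630)   # use whatever API version your cluster supports
db = fdb.open()        # locates the cluster via the cluster file

# A trivial "counter layer": the increment logic lives entirely client-side,
# built on the transactional key-value API rather than inside the server.
@fdb.transactional
def increment(tr, name):
    key = b"counter/" + name.encode("utf-8")
    current = tr[key]
    count = int(current) + 1 if current.present() else 1
    tr[key] = str(count).encode("utf-8")
    return count

print(increment(db, "page_views"))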
Layers are built on top of the client-side library.
Cited from http://community.foundationdb.com/questions/153/what-layers-do-you-want-to-see-first
That's a good question. One reason that it doesn't always make sense to run layers on the server is that in a distributed database, that data is scattered--the servers themselves are a network hop away from a random piece of data, just like the client.
Of course, for something like an analytics layer which is aware of what data each server contains, it makes sense to run a distributed version co-located with each of the machines in the FDB cluster.

Communication architecture choice in an iOS / Linux application?

I have a software architecture problem.
I have to design an iOS application which will communicate with a Linux application to get the state of a sensor and to publish an actuator command. The two applications run on a local network with an ad-hoc WiFi connection between the iOS device and the Linux computer.
So I have to synchronize two values between the two applications (as described in figure 1). In a Linux/Linux system I would solve this kind of problem with any publisher/subscriber middleware, but how can I solve it in an iOS/Linux world?
Currently the Linux application embeds an asynchronous TCP server and the iOS application is an asynchronous TCP client; the two communicate through a TCP socket. This is a rather low-level method, and I would like to migrate the communication layer to a much higher-level, service-based communication framework.
After some bibliographic research I found three ways to solve my problem:
The REST way:
I can create a RESTful web service which models the sensor state and which is able to send commands to the actuator. A RESTful web service client implementation exists for iOS (RESTKit), and I think I can use Apache/Axis2 on the server side.
The RPC way:
I can create an RPC service provider on my Linux computer using libmaia. On the iOS side, I can use xmlrpc (https://github.com/eczarny/xmlrpc). My two programs will communicate through the service described in the figure below.
The ZeroConf way:
I didn't go into the details of this method, but I suppose I can use Bonjour on the iOS side and Avahi on the Linux side, and then create a custom service, as in the RPC way, on both sides.
Discussion of these methods:
The REST way doesn't seem to be the right one, because "The REST interface is designed to be efficient for large-grain hypermedia data transfer" (from Chapter 5 of the Fielding dissertation). My data are very fine-grained: my command is just a float, and so is my sensor state.
I think there is no big difference between the ZeroConf way and the RPC way. ZeroConf provides "only" the service discovery mechanism, and I don't need this kind of mechanism because my application is rigid: both sides know which services exist.
So my questions are:
Is an XML-RPC-based method a good choice to solve my problem of variable synchronization between an iPhone and a computer?
Are there other methods?
I actually recommend you use "TCP socket + protobuf" for your application.
A socket is very efficient at pushing messages to your iOS app, and protobuf saves you the work of defining and parsing a message format out of raw character bytes. Your other, higher-level proposals actually introduce more complications...
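One practical detail with this approach: protobuf messages are not self-delimiting, so you have to add your own framing on the TCP stream. A minimal sketch in Python (the 4-byte big-endian length prefix is a common convention, and the SensorState message type is hypothetical):

import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Prefix each message with its length so the receiver knows where it ends.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    # TCP is a byte stream; keep reading until we have exactly n bytes.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

# Usage sketch, assuming generated protobuf classes:
#   send_msg(sock, sensor_state.SerializeToString())
#   state = SensorState.FromString(recv_msg(sock))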
I can provide no answers; just some things to consider in no particular order.
I am also assuming that your model is that the iOS device polls the server to synchronize state.
It is probably best to stay away from directly using Berkeley sockets on the iOS device. iOS used to have issues with low level sockets not connecting after a period of inactivity. At the very least I would use NSStream or CFStream objects for transport or, if possible, I'd use NSURL, NSURLConnection, NSURLRequest. NSURLConnection's asynchronous data loading capability fits well with iOS' gui update loop.
I think you will have to implement some form of data definition language independent of your implementation method (REST, XML-RPC, CORBA, roll your own, etc.).
The data you send and receive over the wire would probably be XML or JSON. If you use XML you would have to write your own XML document handler, as iOS implements the NSXMLParser class but not the NSXMLDocument class. I would prefer JSON, as the JSON parser will return an NSArray or NSDictionary hierarchy of NSObjects containing the deserialized data.
I have worked on a GSOAP implementation that used CFStreams for transport. Each request and response was handled by a request-specific class to create request-specific objects, and each new request required a new class definition for the returned data. Interactivity was maintained by firing the requests through an NSOperationQueue. Lots of shim here. The primary advantage of this method was that the interface was defined in a WSDL schema (all requests, responses, and data structures were defined in one place).
I have not looked at CORBA on iOS - you would have to tie C++ libraries into your code and change the transport to use CFStreams. Again, lots of shim, but with the advantage of having the protocol defined in the IDL file. Also you would have a single connection to the server instead of making and breaking TCP connections for each request.
My $.02
XML RPC and what you refer to as "RESTful Web Service" will both get the job done. If you can use JSON instead of XML as the payload format, that would simplify things somewhat on the iOS side.
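For a sense of how small the XML-RPC route can be: the server side fits in a handful of lines, for example with Python's standard-library xmlrpc module (the method names and port here are made up; libmaia on the Linux side would play the same role):

from xmlrpc.server import SimpleXMLRPCServer

# Hypothetical shared state on the Linux side.
state = {"sensor": 0.0, "actuator": 0.0}

def get_sensor_state():
    return state["sensor"]

def set_actuator_command(value):
    state["actuator"] = float(value)
    return True  # XML-RPC methods must return a value

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(get_sensor_state)
server.register_function(set_actuator_command)
server.serve_forever()

The iOS client (e.g. the eczarny/xmlrpc library mentioned in the question) would then call get_sensor_state and set_actuator_command by name; from Python, xmlrpc.client.ServerProxy("http://host:8000") gives the same interface.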
Zeroconf (aka bonjour) can be used in combination with either approach. In your case it would allow the client to locate the server dynamically, as an alternative to hard-coding an URL or other address in the client. Zeroconf doesn't play any role in actual application-level data transfer.
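If you do want the dynamic-discovery part, it stays orthogonal to the data transfer. Here is a sketch of advertising the service from the Linux side, assuming the third-party python-zeroconf package (Avahi would normally do this on Linux; the service type, name, and address are invented):

import socket
from zeroconf import ServiceInfo, Zeroconf  # python-zeroconf package

# Advertise a hypothetical "_sensor-rpc._tcp" service so the iOS client can
# find the Linux box via Bonjour instead of a hard-coded address.
info = ServiceInfo(
    "_sensor-rpc._tcp.local.",
    "linux-box._sensor-rpc._tcp.local.",
    addresses=[socket.inet_aton("192.168.1.10")],
    port=8000,
)

zc = Zeroconf()
zc.register_service(info)
try:
    input("Service registered; press Enter to stop...\n")
finally:
    zc.unregister_service(info)
    zc.close()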
You probably want to avoid having the linux app call the iOS app, since that will complicate the iOS app a lot, plus it will be hard on the battery.
You seem to have cherry-picked some existing technologies and to be trying to make them fit the problem.
I would like to migrate the communication layer to a much higher-level, service-based communication framework
Why?
You should be seeking the method which meets your requirements in terms of available resources (can you assume that the client can maintain a consistent connection? how secure does it need to be?). Besides functionality, availability, and security, though, the biggest concern should be how to implement this with the least amount of effort.
I'd be leaning towards the REST approach because:
I do a lot of web development so that's where my skills lie
it has minimal dependencies
there is well supported code implementing the protocol stack at both ends
it's trivial to replace either end of the connection to test out the implementation
it's trivial to monitor the communications (if they're not encrypted) to test the implementation
adding encryption / authentication does not change the data exchange
Regarding your citation: no, HTTP is probably not the most sensible choice for SCADA, but then neither is iOS.

Websocket scalability, broadcasting concerns

If you have a complex requirement set with many users (and servers), how will your websocket infrastructure (server[s]) scale, especially with broadcasting?
Of course, broadcasting is not part of any websocket spec, but it's there even in basic chat examples (a.k.a. the hello world of websockets).
A client-side solution (asking for new data) still seems more scalable than a server-side (broadcasting) solution, given websockets' low latency and relatively cheap (HTTP-headerless) nature.
Edit:
OK, suppose you want to replace all your AJAX code with websocket implementations, which may mean many connections in many different contexts. This adds enormous complexity to your system if you want to keep track of every possible scenario for broadcasting.
Low-level (network/thread, etc.) implementation suggestions are also part of the problem, not the solution, because they mean you have to code a special server, unlike general HTTP servers.
Moreover, broadcasting brings a stateful nature to the table which can't easily scale. Think about adding more servers and load balancing.
Scaling realtime web solutions can be a complex problem, but it is one that services like Pusher (who I work for) have solved, and one for which there are well-defined solutions for self-hosted realtime web stacks. The PubSub paradigm is well understood and has been solved many times, and in order to solve the problem there needs to be some state (who is subscribing to what). This paradigm is used for broadcasting in the types of scenarios that you are talking about.
Realtime web technologies have been built with large numbers of simultaneous connections in mind, many from the ground up. If you wanted to create a scalable solution you would most likely use an existing realtime web server that supports WebSockets; in the same way that it's highly unlikely you would implement your own HTTP server, you are unlikely to want to implement your own WebSocket-capable server from scratch.
Dedicated realtime web servers also let you separate your application logic from your realtime communication mechanism (separation of concerns). Your application might need to maintain some state, but the realtime technology deals with managing subscriptions and connections. How communication between the application and the realtime web technology is achieved is up to you, but message queues are frequently used, and Redis in particular is very popular in this space.
HTTP polling may conceptually be easier to understand - you can maintain statelessness, and with each HTTP poll request you specify exactly what you are looking for. But it most definitely means that you will need to start scaling much sooner (adding more resources to handle the load).
WebSocket polling is something I've not considered before, and I don't think I've seen it suggested anywhere either; the idea that the client should say "I'm ready for my next set of data and here's what I want" is an interesting one. WebSockets have generally taken a leap away from the request/response paradigm, but there may be scenarios where the increased efficiency of WebSockets combined with request/response over them has some benefits. The SocketStream application framework might be worth a look as it might be relevant; after the initial application load, all communication is performed over WebSockets, which means that even basic request/response functionality uses WebSockets.
However, since we are talking about broadcasting data we need to go back to the PubSub paradigm where it makes much more sense to have active subscriptions and when new data is available that new data is distributed to those active subscriptions (pushed). All your application needs to know is if there are any active subscriptions or not in order to decide whether to publish the data or not. That problem has been solved.
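To make the "active subscriptions" idea concrete, here is a minimal in-memory pub/sub sketch, assuming Python and the third-party websockets package (channel names and the JSON message shape are invented; a real deployment would keep the subscription state in something like Redis so multiple servers can share it):

import asyncio
import json
import websockets  # third-party "websockets" package

subscriptions = {}  # channel name -> set of connected client sockets

async def handler(ws):  # one-argument handler assumes websockets >= 10.1
    try:
        async for raw in ws:
            msg = json.loads(raw)
            if msg["action"] == "subscribe":
                subscriptions.setdefault(msg["channel"], set()).add(ws)
            elif msg["action"] == "publish":
                # Push the new data to every active subscription on the channel.
                for client in list(subscriptions.get(msg["channel"], ())):
                    await client.send(json.dumps(msg["data"]))
    finally:
        for subscribers in subscriptions.values():
            subscribers.discard(ws)  # drop the connection from all channels

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())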
The idea of websockets is that you keep a persistent connection with each client. When there is new data that you want to send to every client, you already know who all the clients are so you should just send it.
It sounds like you want each client to constantly send requests to the server for new data. Why? It seems like that would waste everyone's bandwidth, and I don't know why you think it would be more scalable. Maybe you could add more detail to your question, like what kind of information you are broadcasting, how often, how many bytes, how many clients, etc.
Why not just consider an open websocket connection to be like a standing request from the client for more data?

Is it possible to build a web-based chat client without a socket-based framework?

I have heard that web-based chat clients tend to use networking frameworks such as the Twisted framework.
But would it be possible to build a web-based chat client without a networking framework - using only ajax connections?
I would like to build a session-based one-to-one web chat client that uses sessions to indicate when a chat has ended. Would this be possible in Rails using only ajax and without a networking framework?
What effect does it have to use a networking framework and what impact would it have on my app to not use one? Also any general recommendations for approaching this project would be appreciated.
If I understand you correctly, you want two clients to connect to your server and send messages to each other through AJAX, via the server.
This is possible, there are two approaches to do this.
The easy approach is to have both clients poll every few seconds to check for new messages posted by the other. The drawback is that messages are not delivered instantly. I think there is an example of this in the Rails book.
The more complex approach is to keep an open connection and send the messages to the client as soon as they are received by the server. To do this you can use something like Juggernaut.
I would like to add that though the latter works, it is not something HTTP was meant for and is a bit of a hack, but hey, whatever gets the job done. A working example of this is the Rails chat project, which uses a Juggernaut derivative.
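The polling approach needs nothing beyond two plain endpoints that the browser hits with AJAX. The answer's context is Rails, but as a language-neutral illustration, here is a minimal sketch in Python/Flask (routes, JSON shape, and the "since" parameter are all invented; a real app would also tie messages to a chat session and persist them):

from flask import Flask, jsonify, request

app = Flask(__name__)
messages = []  # in-memory store: {"id": int, "chat": str, "from": str, "text": str}

@app.post("/chats/<chat_id>/messages")
def post_message(chat_id):
    msg = {"id": len(messages) + 1, "chat": chat_id, **request.get_json()}
    messages.append(msg)
    return jsonify(msg), 201

@app.get("/chats/<chat_id>/messages")
def poll_messages(chat_id):
    # The client polls every few seconds, passing the last message id it saw.
    since = int(request.args.get("since", 0))
    return jsonify([m for m in messages if m["chat"] == chat_id and m["id"] > since])

if __name__ == "__main__":
    app.run()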
Technically speaking, every network-based application has a networking framework under it and is therefore socket-based...
The only real question here is whether you want all that chatter to go through your server or to allow point-to-point communication. If the former, you can use an AJAX framework to talk to your web server. This means that all of your clients will be constantly polling the web server for updates.
If the latter, then you have to allow direct TCP connections between the two clients and need to get a little closer to the metal, so to speak.
So, ask yourself this: Do you want to pay for the traffic costs AND have potential liability over divulging whatever it is that people might be typing into their client; or, would you rather just build a chat program that people can use to talk to each other?
Of course, before even going that far, do you really want to build yet another chat client? That space is already pretty crowded.

Best practice for rate limiting users of a REST API?

I am putting together a REST API and, as I'm unsure how it will scale or what the demand for it will be, I'd like to be able to rate limit use of it as well as temporarily refuse requests when the box is over capacity or in some kind of slashdotted scenario.
I'd also like to be able to gracefully bring the service down temporarily (while giving clients results that indicate the main service is offline for a bit) when/if I need to scale the service by adding more capacity.
Are there any best practices for this kind of thing? The implementation is Rails with MySQL.
This is all done with the outer webserver, which listens to the world (I recommend nginx or lighttpd).
Regarding rate limits, nginx is able to limit requests, e.g. to 50 req/minute per IP; everything over that gets a 503 page, which you can customize.
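If you also want a belt-and-braces check inside the application (in Rails this would typically live in Rack middleware), the core of a per-IP limiter is just a sliding-window counter. A minimal sketch in Python, reusing the 50 req/minute figure from above (the function name and data structure are only illustrative):

import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS = 50            # mirrors the "50 req/minute per IP" example above

_hits = defaultdict(list)    # client IP -> timestamps of recent requests

def allow_request(ip: str) -> bool:
    """Sliding-window limiter: keep only the last minute of timestamps per IP."""
    now = time.time()
    _hits[ip] = [t for t in _hits[ip] if now - t < WINDOW_SECONDS]
    if len(_hits[ip]) >= MAX_REQUESTS:
        return False         # caller responds with 503 (or 429 Too Many Requests)
    _hits[ip].append(now)
    return True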
Regarding expected temporary downtime, in the Rails world this is done via a special maintenance.html page. There is usually some automation that creates or symlinks that file when the Rails app servers go down. I'd recommend relying not on file presence, but on the actual availability of the app server.
But really, you are able to start/stop services without losing any connections at all. I.e., you can run a separate instance of the app server on a different UNIX socket/IP port and have the balancer (nginx/lighty/haproxy) use that new instance too. Then you shut down the old instance and all clients are served by the new one only; no connections are lost. Of course this scenario is not always possible; it depends on the type of change you introduced in the new version.
haproxy is a balancer-only solution. It can extremely efficiently balance requests to app servers in your farm.
For a quite big service you end up with something like:
api.domain resolving round-robin to N balancers
each balancer proxying requests to M webservers for static content and P app servers for dynamic content. Oh well, your REST API doesn't have static files, does it?
For a quite small service (under 2K rps) all the balancing is done inside one or two webservers.
Good answers already - if you don't want to implement the limiter yourself, there are also solutions like 3scale (http://www.3scale.net) which does rate limiting, analytics etc. for APIs. It works using a plugin (see here for the ruby api plugin) which hooks into the 3scale architecture. You can also use it via varnish and have varnish act as a rate limiting proxy.
I'd recommend implementing the rate limits outside of your application since otherwise the high traffic will still have the effect of killing your app. One good solution is to implement it as part of your apache proxy, with something like mod_evasive
