How is it possible to synchronize data between multiple devices? - storage

Let's say you are writing a program that many clients use. A single server can only handle a certain number of connections. The more connections you need to handle, the more capacity you need, until you end up with a server farm made up of multiple machines.
If, for example, you run an application where different clients can store data on your servers, how can that data be kept synchronized across each device? Which hardware/software solutions exist for this? And how is all the data stored?

I can suggest an idea for a hand-rolled approach using the file system only: clients and the server exchange files, and the server program periodically (for example, every 5 minutes) broadcasts the list of all its files to every connected client, so exchanges have to wait for that interval. If we are talking about a large volume of files a longer interval makes sense; for small files, 30 seconds or 1 minute can be enough.
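As a rough sketch of that idea (in Python, purely for illustration; the directory name, the interval and the queue-per-client delivery are my own assumptions, not part of the original suggestion):

```python
# Minimal sketch of the "server broadcasts its file list every N seconds" idea.
import json
import os
import queue
import time

SYNC_DIR = "./shared"        # directory whose contents the server advertises (assumed)
INTERVAL = 5 * 60            # 5 minutes for big files; 30-60 s is enough for small ones

clients: list[queue.Queue] = []   # one outgoing message queue per connected client

def broadcast_file_lists() -> None:
    """Every INTERVAL seconds, push the current file list to every client."""
    os.makedirs(SYNC_DIR, exist_ok=True)
    while True:
        listing = sorted(os.listdir(SYNC_DIR))
        message = json.dumps({"ts": time.time(), "files": listing})
        for q in clients:
            q.put(message)    # each client's sender thread drains its own queue
        time.sleep(INTERVAL)

if __name__ == "__main__":
    broadcast_file_lists()    # in a real server this would run in its own thread
```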

There are many ways to do that...
I think you will need a single source of truth, let's say the server in this case.
Then you can have an incremental version number (an id) that reflects the latest changes.
Each device can poll just that version number to know whether it is up to date.
If not, the device can ask the server for the changes since the version that device has.
Each time a device makes a change, that change is stored on the server and the version is incremented.
That can be a basic implementation. If you need real-time updates, you can add a publish-subscribe channel on top of it using sockets... or a service like channels from pusher.com.
Then, every time a device makes a change on the server, you can send a notification on the channel with some information (the change itself) or the new version id, and all the devices can either apply that single change or ask the server for all the changes if there are many (for example because the device was disconnected from the internet for a while).
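A minimal sketch of that version-counter scheme, assuming an HTTP API built with Flask; the endpoint names and the in-memory change log are placeholders for whatever storage the server actually uses:

```python
# Sketch of the version-counter sync scheme described above (assumed endpoints).
from flask import Flask, jsonify, request

app = Flask(__name__)

changes: list[dict] = []      # change log; len(changes) is the current version

@app.get("/version")
def version():
    # Devices poll this cheap endpoint to see whether they are up to date.
    return jsonify({"version": len(changes)})

@app.get("/changes")
def get_changes():
    # A device sends the last version it has and receives everything newer.
    since = int(request.args.get("since", 0))
    return jsonify({"version": len(changes), "changes": changes[since:]})

@app.post("/changes")
def post_change():
    # Each accepted change implicitly increments the version (list length).
    changes.append(request.get_json(force=True))
    return jsonify({"version": len(changes)})

if __name__ == "__main__":
    app.run()
```

A device then only compares GET /version with the number it has locally and, when it is behind, fetches GET /changes?since=<its version>.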

Related

Handling streaming data from a mobile app (via POST)

At some point a dedicated IoT device and app may be created, but for now I'm working with an iPhone app that doesn't fully address the requirements but is still helpful.
The app can stream its data via POST. I have a PHP file set up that captures the data and writes it out to a CSV file.
The data is a time series with several columns, sent as a POST every second for about 10 minutes in total.
Instead of writing to a CSV, the data needs to be persisted to a database.
What I'm unsure about...
Since this is just a proof of concept it may not be an issue until later, but can the high frequency of new connections and inserts be expensive, assuming a new connection is needed for each POST? For now I have no way of authenticating the device, so I'm assuming I can use a local account for all known devices.
Is there a better way of handling the data than running a web server with a PHP script that grabs it? I was thinking of Kafka plus a database connector to persist the data, but I have no way of configuring the mobile app to tell it how to send data to the server; communication is not two-way. Otherwise, my experience with POST requests is limited to typical web form inputs.
Anyone able to give some guidance?
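Not addressing the Kafka part, but as a hedged sketch of the minimal change - inserting each POSTed row into a database instead of appending to a CSV - here is what it might look like, assuming a Python/Flask endpoint and SQLite; the route, table and column names are invented for the example and are not the original PHP endpoint:

```python
# Sketch: persist the once-per-second POSTs to a database instead of a CSV.
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
DB_PATH = "telemetry.db"   # hypothetical SQLite file

def init_db() -> None:
    with sqlite3.connect(DB_PATH) as con:
        con.execute(
            "CREATE TABLE IF NOT EXISTS samples ("
            " device_id TEXT, ts REAL, payload TEXT)"
        )

@app.post("/ingest")
def ingest():
    row = request.get_json(force=True)
    # One short-lived connection per POST; at one row per second this is cheap.
    with sqlite3.connect(DB_PATH) as con:
        con.execute(
            "INSERT INTO samples (device_id, ts, payload) VALUES (?, ?, ?)",
            (row.get("device_id", "local"), row.get("ts"), str(row.get("data"))),
        )
    return jsonify({"ok": True})

if __name__ == "__main__":
    init_db()
    app.run()
```

At one row per second per device, a plain relational database should handle this comfortably; Kafka is likely unnecessary at this volume.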

Erlang/Elixir on Docker and Hot Code Swap

One of the features of Erlang (and, by extension, Elixir) is that you can do hot code swapping. However, this seems to be at odds with Docker, where you would need to stop your instances and start new ones from new images holding the new code. That essentially seems to be what everyone does.
That being said, I also know that it is possible to use one hidden node to distribute updates to all other nodes over the network. Of course, put like that it sounds like asking for trouble, but...
My question is the following: has anyone tried, with reasonable success, to set up a Docker-based infrastructure for Erlang/Elixir that allows hot code swapping? If so, what are the dos, don'ts and caveats?
The story
Imagine a system to handle mobile phone calls or mobile data access (that's what Erlang was created for). There are gateway servers that maintain the user session for the duration of the call, or the data access session (I will call it the session going forward). Those servers hold an in-memory representation of the session for as long as the session is active (the user is connected).
Now there is another system that calculates how much to charge the user for the call or the data transferred (call it PDF - Policy Decision Function). Both systems are connected in such a way that the gateway server creates a handful of TCP connections to the PDF, and it drops user sessions if those TCP connections go down. The gateway can handle a few hundred thousand customers at a time. Whenever there is an event that the user needs to be charged for (the next data transfer, another minute of the call) the gateway notifies the PDF, and the PDF subtracts a specific amount of money from the user's account. When the user's account is empty the PDF notifies the gateway to disconnect the call (you've run out of money, you need to top up).
Your question
Finally, let's talk about your question in this context. We want to upgrade a PDF node and the node is running on Docker. We create a new Docker instance with the new version of the software, but we can't shut down the old version (there are hundreds of thousands of customers in the middle of their calls; we can't disconnect them). But we need to move the customers somehow from the old PDF to the new version. So we tell the gateway node to create any new connections to the updated node instead of the old PDF. Customers can be chatty, and some of them may have long-running data connections (downloading a Windows 10 ISO), so the whole operation takes 2-3 days to complete. That's how long it can take to move from one version of the software to another in case of a critical bug. And there may be dozens of servers like this one, each handling hundreds of thousands of customers.
But what if we used the Erlang release handler instead? We create the relup file with the new version of the software. We test it properly and deploy it to the PDF nodes. Each node is upgraded in place - the internal state of the application is converted and the node ends up running the new version of the software. Most importantly, the TCP connection with the gateway server has not been dropped. So customers happily continue their calls, or keep downloading the latest Windows ISO, while we are upgrading the system. All is done in 10 seconds rather than 2-3 days.
The answer
This is an example of a specific system with specific requirements. Docker and Erlang's Release Handling are orthogonal technologies. You can use either or both, it all boils down to the following:
Requirements
Cost
Will you have enough resources to test both approaches properly, and enough patience to teach your Ops team so that they can deploy the system using either method? What if the testing facility costs millions of pounds (because of the required hardware) and can exercise only one of those two methods at a time (because the test cycle takes days)?
The pragmatic approach might be to deploy the nodes initially using Docker and then upgrade them with the Erlang release handler (if you need to use Docker in the first place). Or, if your system doesn't need to be available during the upgrade (unlike the example PDF system), you might just opt for always deploying new versions with Docker and forget about release handling. Or you may as well stick with the release handler and forget about Docker, if you need quick and reliable on-the-fly updates and Docker would only be used for the initial deployment anyway. I hope that helps.
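For what it's worth, here is a toy sketch (in Python, purely illustrative - a real gateway would be Erlang and far more involved) of the Docker-style "drain" strategy from the story: new sessions are routed to the upgraded instance while existing sessions stay pinned to the old one until they end. All names are made up for the sketch.

```python
# Toy illustration of draining an old instance during a Docker-style upgrade.
class PdfInstance:
    def __init__(self, name: str, draining: bool = False) -> None:
        self.name = name
        self.draining = draining
        self.sessions: set[str] = set()

class Gateway:
    def __init__(self, old: PdfInstance, new: PdfInstance) -> None:
        self.instances = [old, new]

    def open_session(self, session_id: str) -> PdfInstance:
        # New sessions only ever land on a non-draining instance.
        target = next(i for i in self.instances if not i.draining)
        target.sessions.add(session_id)
        return target

    def close_session(self, session_id: str) -> None:
        for inst in self.instances:
            inst.sessions.discard(session_id)

    def old_can_be_stopped(self) -> bool:
        # The old container can only be removed once every session has drained,
        # which in the story above takes days for long-lived calls/downloads.
        return all(not i.sessions for i in self.instances if i.draining)

old = PdfInstance("pdf-v1", draining=True)
old.sessions.add("existing-caller")   # a customer mid-call on the old version
new = PdfInstance("pdf-v2")
gw = Gateway(old, new)
gw.open_session("new-caller")         # lands on pdf-v2
print(gw.old_can_be_stopped())        # False until pdf-v1 has no sessions left
```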

How to license number of users at the database?

Given a Delphi and Interbase client-server application, I'd like to license the application by the number of users at the database. How can this be done with commercial licensing software? I don't see any of those listing features that look like they would cover this. Every user initially logs on to the database. The database seems so available that it would be open to any user - or at least administrators. Would I have to also write a Delphi exe or dll to run on the server - perhaps as a function in the database - with the licensing connected to that? Not sure how to proceed.
BTW, InterBase licenses simultaneous users, but I think they wrote that right into the server; I want something similar.
To control simultaneous client connections you definitely need a server-side application.
It can be a simple TCP/IP socket server running as a service (a daemon on Linux) or another (MIDAS?) server layer.
When your client app starts, it calls a server method, for example Session.Connect; there you count active connections and return false (no code) when the maximum limit has been reached.
When the application closes, you notify the server with Session.Disconnect to decrease the connection count.
It is also a good idea to keep a live (permanent) connection between the client app and the server service (as I said, sockets) to handle application hang-ups and uncontrolled restarts; process that event (for example OnSocketDisconnect on the server side) to decrease the connection count and handle the disconnect properly, for example by writing to the logs, etc.
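A minimal sketch of that counting logic, assuming a simple in-process gate; the class and method names mirror the Session.Connect / Session.Disconnect idea but are otherwise invented, and a real Delphi/MIDAS service would look different:

```python
# Sketch of the server-side "count active sessions, refuse above a limit" idea.
import threading

class LicenseGate:
    def __init__(self, max_users: int) -> None:
        self.max_users = max_users
        self.active: set[str] = set()
        self.lock = threading.Lock()

    def connect(self, client_id: str) -> bool:
        """Session.Connect: admit the client only if a licensed seat is free."""
        with self.lock:
            if client_id in self.active:
                return True                      # already counted
            if len(self.active) >= self.max_users:
                return False                     # licence limit reached
            self.active.add(client_id)
            return True

    def disconnect(self, client_id: str) -> None:
        """Session.Disconnect (or a dropped-socket event): free the seat."""
        with self.lock:
            self.active.discard(client_id)

gate = LicenseGate(max_users=10)
print(gate.connect("workstation-1"))   # True while seats remain
gate.disconnect("workstation-1")
```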
Of course, communication should be encrypted (with a handshake) to keep unwanted guests out.
You can also play with SIM card readers, etc.
This method will not provide an industrial (nuclear) level of security, but if coded correctly it can take some time even for an expert hacker to break it.
Or you may take a look at some ready-made protection tools like SafeNet (HASP protection).
Also, Firebird (and maybe InterBase) has database ON CONNECT / ON DISCONNECT triggers, where a user with sufficient privileges can read the connection count. But these can be easily changed if the database is stored on the customer's server.

Sending large amounts of data from windows app to service app

I'm building a system with some remote desktop capabilities. Every computer sharing its desktop is considered a client, and the server is a central server with a database which receives the images of all the shared desktops. On the client side, I would like to build two projects: a Windows service application and a VCL forms application. Each client app would presumably be running under a different user account on the computer, so there might be multiple client apps running at once, and they all send their image to this client service, which relays them to the central server.
The service will be responsible for connecting to the server, sending the image, and receiving mouse/keyboard events. The application, which is running in the background, will connect to this service somehow and transmit the screenshots to it. The goal is that one service is running while multiple "clients" are able to connect to it and send their desktop images. This service will be connected to the "central server", which receives all these different screenshots from different "clients". The images will then be either saved and logged or redirected to any "dashboard" which might be viewing that "client".
The question is through what method should I use to connect the client applications to the client service to send images? They will be running on the same computer. I will need both the abilities to send simple command packets as well as stream a chunk of an image. I was about to use the Indy components (TIdTCPServer etc.) but I'm sure there must be an easier and cleaner way to do it. I'm using the Indy components elsewhere in the projects too.
Here's a diagram of the overall system I'm aiming for - I'm just worried about the parts on the far right and far left, where the apps connect to the service within the same computer. As you can see, since there are many layers, I need to make sure whatever method(s) I use are powerful enough to accommodate streaming massive amounts of image data.
For communication among processes you can use pipes/mailslots/sockets; I also think that, for sending a stream of file data, shared memory may be the most efficient way.
I've done this a few times now, in a number of different configurations. The key to making it easy for me was using the RemObjects SDK which took care of the communications part. With a thread that controls its state, I can have a connection to a server or service that is reliable, and can transfer anything from a status byte through to transferring many megabytes of data (it is recommended that you use small chunks for large data so that you have more fine grained control over errors and flow). I now have a set of high reliability templates that I can deploy to make a new variation quite easily, and it can be updated with new function calls without much hassle (first thing I do is negotiate versions between the client and server so they know what they can support). Because it all works at a high level, my code is just making "function calls" and never worrying about what the format on the wire is. Likewise I can switch from their binary format to standard SOAP or other without changing the core logic. Finally, the connections can be local, to the same machine (I use this for end user apps talking to a background service) or to a machine on the LAN or internet. All in the same code.
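For illustration only (not Indy or RemObjects), here is a sketch of the "small chunks over a local connection" approach, assuming a loopback TCP socket and a 4-byte length prefix; the host, port and chunk size are placeholders:

```python
# Sketch: stream a screenshot from a per-user app to the local service in chunks.
import socket
import struct

HOST, PORT = "127.0.0.1", 5501        # assumed loopback endpoint of the service
CHUNK = 64 * 1024                     # small chunks keep error handling fine-grained

def send_image(payload: bytes) -> None:
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(struct.pack("!I", len(payload)))   # announce total size
        for offset in range(0, len(payload), CHUNK):
            sock.sendall(payload[offset:offset + CHUNK])

def recv_exact(conn: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        part = conn.recv(min(n - len(buf), CHUNK))
        if not part:
            raise ConnectionError("peer closed mid-message")
        buf += part
    return buf

def recv_image(conn: socket.socket) -> bytes:
    size = struct.unpack("!I", recv_exact(conn, 4))[0]
    return recv_exact(conn, size)
```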

Push Messages from Azure application to MonoTouch (iPhone) application without Apple Push Notifications

I'm currently designing an application for iOS (using MonoTouch) that will have a server component running on Windows Azure. The application will essentially be a chat type application where users will generate messages within their clients and send them to the server, which will then need to forward those messages out (as quickly as practicable) to other clients that the user might be sending the messages to.
My question is - is there a recommended practice for architecting an application like this, where clients need to receive 'push' messages from the server?
I've considered a few options but would appreciate feedback.
The first option is to use Apple's Push Notifications service (APNs). I have two concerns about this - first, the clients only need to receive the messages when they're online (APNs sends messages through when the app is closed too, which I don't need or want); and second, there is a possibility that there will be a high volume of messages, which I know Apple would probably get unhappy about (perfectly fairly).
A second option I considered is using a web service (WCF-based) and having the client call this service every (say) 2-3 seconds, which is the maximum delay we could tolerate. This would seem to involve a great deal of potentially unnecessary network traffic, though ("have you got anything for me?", "no", repeated ad nauseam).
A third option is to maintain a persistent web service connection between the client and the server. When the client app starts it would call a web service method on a background thread. The server would hold the connection open (by not returning anything), and if any messages came through it would immediately return them. This connection might time out after, say, 2 minutes at which point it would be re-established. This seems to do what I want, but again, I'm concerned that there'd be a lot of connections open to the server at any moment, which could require server resources unnecessarily.
A fourth option is to use a persistent connection over TCP (or UDP, although from what I've found, Windows Azure doesn't support this). This seems to be a good option, but again, might be overkill in terms of server usage - there could potentially be hundreds or even thousands of clients connected at any moment.
A fifth option is to somehow have the server push messages directly to the client, perhaps by having the client run a mini web server or similar. However, as the app will be running on 3G and WiFi networks (beyond my control) I don't expect incoming ports will be open for this sort of thing.
If anyone has any other suggestions, or thinks one of the above options would be a good idea (or is a standard way of approaching this sort of problem) I'd be very interested to hear about it.
Thanks in advance,
John
Have you had a look at PubNub, http://www.pubnub.com/ ?
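To make the third option above (long polling) concrete, here is a minimal sketch assuming a Python/Flask service; the routes, the per-client queues and the roughly 2-minute timeout are placeholders, and a real Azure/WCF implementation would differ:

```python
# Minimal long-polling sketch: hold the request open until a message arrives.
import queue
from flask import Flask, jsonify, request

app = Flask(__name__)
inboxes: dict[str, queue.Queue] = {}    # one message queue per client id

def inbox_for(client_id: str) -> queue.Queue:
    return inboxes.setdefault(client_id, queue.Queue())

@app.post("/send")
def send():
    msg = request.get_json(force=True)
    inbox_for(msg["to"]).put(msg)       # deliver to the recipient's queue
    return jsonify({"queued": True})

@app.get("/poll")
def poll():
    client_id = request.args["client"]
    try:
        # Block for up to ~2 minutes; the client re-polls as soon as we return.
        msg = inbox_for(client_id).get(timeout=120)
        return jsonify({"messages": [msg]})
    except queue.Empty:
        return jsonify({"messages": []})

if __name__ == "__main__":
    app.run(threaded=True)   # each held-open poll occupies one worker thread
```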

Resources