Handling streaming data from a mobile app (via POST)

A dedicated IoT device and app may be created at some point, but for now I'm working with an iPhone app that doesn't fully meet the requirements yet is still helpful.
The app can stream its data via POST. I have a PHP file set up that captures the data and writes it out to a CSV file.
The data is a time series with several columns, sent as a POST every second; a session is about 10 minutes in total.
Instead of writing to a CSV file, the data needs to be persisted to a database.
What I'm unsure about...
Since this is just a proof of concept it may not be an issue until later, but can the high frequency of new connections and inserts be expensive? I'm assuming that each POST requires a new connection. For now I have no way of authenticating the device, so I'm assuming I can use a local account for all known devices.
Is there a better way of handling the data than running a web server with a PHP script that grabs it? I was thinking of Kafka plus a connector for a database to persist the data, but I have no way of configuring the mobile app to tell it how to send data to the server; communication is one-way only. Beyond that, my experience with POST requests is limited to typical web form inputs.
Can anyone give some guidance?
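For what it's worth, here is a minimal sketch of such an ingest endpoint in Python rather than PHP (the telemetry.db file, samples table, and JSON payload shape are all assumptions for illustration). A single long-lived database connection amortizes the per-POST cost the question worries about, and HTTP keep-alive lets the app reuse one TCP connection for its once-per-second posts:
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# One long-lived connection amortizes database connection cost across POSTs.
db = sqlite3.connect("telemetry.db")
db.execute("CREATE TABLE IF NOT EXISTS samples (ts TEXT, payload TEXT)")

class IngestHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # lets the app keep one TCP connection open

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        record = json.loads(body)   # assumes the app posts a JSON object
        db.execute("INSERT INTO samples VALUES (?, ?)",
                   (record.get("ts"), json.dumps(record)))
        db.commit()
        self.send_response(204)     # acknowledge with an empty response
        self.end_headers()

HTTPServer(("", 8080), IngestHandler).serve_forever()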

Related

How is it possible to synchronize data between multiple devices?

Let's say you are writing a program that many clients use. One server can only handle connections up to a certain number; the more connections you need to handle, the more power you need, until you end up with a server farm containing many machines.
If, for example, you run an application where different clients can store data on your servers, how is it possible to synchronize the data on each device? Which hardware/software solutions exist? And how is all the data stored?
One manual approach, using only the file system: clients and the server exchange files, and the server periodically (for example, every 5 minutes) broadcasts the list of all its files to all connected clients. Exchanges then have to wait for the next broadcast, which matters if we are talking about a large volume of files; for small files, an interval of 30 seconds or 1 minute can be enough.
There are many ways to do this...
I think you will need a single source of truth; let's say the server, in this case.
Then you can keep an incrementing version number (ID) that identifies the latest change.
Each device can poll that version number to find out whether it is up to date.
If it is not, the device can ask the server for the changes since the version it has.
Each time a device makes a change, that change is stored on the server and the version is incremented.
That is one basic implementation (see the sketch below). If you need real-time updates, you can add to it a publish-subscribe channel using sockets, or a service such as Channels from pusher.com.
Then, every time a device makes a change on the server, you can send a notification on the channel containing either the change itself or the new version ID, and all the devices can apply the change directly (if it is only one) or ask the server for all the changes if there are many (for example, if a device was disconnected from the internet for a while).
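A minimal sketch of that version-counter scheme in Python (all names are illustrative; a real server would expose these functions as HTTP endpoints):
changes = []                        # server-side log: one entry per version

def server_version():
    return len(changes)             # the version is just the log length

def changes_since(client_version):
    return changes[client_version:] # everything the client has not yet seen

def apply_change(change):
    changes.append(change)          # storing a change bumps the version

# One client poll: catch up if the server has moved past our local version.
def client_sync(local_version, local_state):
    if local_version < server_version():
        local_state.extend(changes_since(local_version))
        local_version = server_version()
    return local_version, local_state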

Download SQLite database from Django web app and use in iOS?

I'm developing a site-specific installation for an office lobby which will display content on 6 iPads. The installation has several megabytes of data which will be managed by a Django web app. I'm considering different strategies for fetching the content data from the web app. So far, I have simply been dumping the data into XML format and fetching it via a single HTTP request from the iPad to the content server. I then load all of the content into memory on the iPad.
I'm beginning to be concerned that I may run into memory issues as the amount of content grows, and that storing the entire database in memory won't work. The natural next step is to think about a database on the iPads. I'm using SQLite for the content server, so it seems feasible to simply download the entire database file itself and query it directly from the iPad.
Proposed Approach
Download the actual SQLite database file nightly from the Django content server to each of the six iPads used in the office lobby installation.
Things I like about this approach:
It could be really simple. It removes the whole web services layer from the system.
It protects nicely against network problems. If the network is unavailable, the worst problem is that the iPads display stale data, as opposed to there being no content at all if the system were network-dependent.
Things I don't like about this approach
I'm not sure how to safely download the file. How do I ensure that the file I'm downloading is in a valid state, and that I'm not downloading while someone is updating it?
I've never heard of anybody doing this, or even considered doing it. It seems like it's far from tried and true.
My questions
Can anyone think of reasons why this is a bad idea?
How can I safely download a SQLite file with confidence that it's in a valid state?
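Regarding the second question, one possible approach (a sketch, assuming a small Python job can run nightly on the server) is SQLite's backup API, which yields a consistent copy even if the live database is being written; the iPad side can then download to a temporary file and rename it into place only once the transfer completes, so readers never see a half-written file:
import sqlite3

# Take a consistent snapshot of the live database for the iPads to download.
# File names are illustrative.
src = sqlite3.connect("content.db")
dst = sqlite3.connect("content-snapshot.db")
src.backup(dst)   # page-by-page copy, safe against concurrent updates
dst.close()
src.close()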
Why don't you create a syncing system, perhaps with JSON?
I've done something like this before: I had a central repository server on site that was running my Django web application. The different iPads would sync regularly with the web app's database, making sure their local data matched the server data; if not, they would update via JSON.
On the iPad itself, I was using PhoneGap's SQLite syntax, which worked perfectly for storing the client-side data. But the key was syncing this database via JSON to the central repository's database, rather than physically moving the SQLite db over to the iPad.
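The server side of such a JSON sync could be a simple "changes since" view in Django; a sketch, where the ContentItem model and its updated_at field are assumptions for illustration:
from django.http import JsonResponse
from .models import ContentItem   # hypothetical model with an updated_at field

def changes_since(request):
    # The iPad passes the timestamp of its last successful sync.
    since = request.GET.get("since", "1970-01-01T00:00:00")
    items = ContentItem.objects.filter(updated_at__gt=since)
    return JsonResponse({"items": [
        {"id": i.pk, "title": i.title, "updated_at": i.updated_at.isoformat()}
        for i in items
    ]})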

Sending large amounts of data from windows app to service app

I'm building a system with some remote desktop capabilities. A client is any computer that is sharing its desktop; the server is a central server with a database which receives the images of all the shared desktops. On the client side, I would like to build two projects: a Windows service application and a VCL forms application. Each client app would presumably be running under a different user account on the computer, so there might be multiple client apps running at once, and they all send their image to this client service, which relays them to the central server.
The service will be responsible for connecting to the server, sending the image, and receiving mouse/keyboard events. The application, which runs in the background, will connect to this service somehow and transmit the screenshots to it. The goal is that one service is running while multiple "clients" are able to connect to it and send their desktop images. This service will be connected to the "central server", which receives all these different screenshots from different "clients". The images will then be either saved and logged or redirected to any "dashboard" which might be viewing that "client".
The question is: what method should I use to connect the client applications to the client service to send images? They will be running on the same computer. I need both the ability to send simple command packets and the ability to stream chunks of image data. I was about to use the Indy components (TIdTCPServer etc.), but I'm sure there must be an easier and cleaner way to do it. I'm already using the Indy components elsewhere in the projects.
Here's a diagram of the overall system I'm aiming for. I'm just worried about the parts on the far right and far left, where the apps connect to the service within the same computer. As you can see, since there are many layers, I need to make sure whatever method(s) I use are powerful enough to accommodate streaming massive amounts of image data.
For communication among processes you can use pipes, mailslots, or sockets. For streaming file-sized data, I also think shared memory may be the most efficient way.
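If you go the socket route, the essential piece is a small framing protocol so that one loopback connection can carry both short command packets and large image chunks. A sketch of length-prefixed framing in Python; an Indy version would implement the same wire format with TIdTCPServer/TIdTCPClient:
import socket
import struct

# Each frame: 1-byte kind (command vs. image chunk), 4-byte length, payload.
def send_frame(sock: socket.socket, kind: int, payload: bytes):
    sock.sendall(struct.pack(">BI", kind, len(payload)) + payload)

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket):
    kind, length = struct.unpack(">BI", recv_exactly(sock, 5))
    return kind, recv_exactly(sock, length)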
I've done this a few times now, in a number of different configurations. The key to making it easy for me was using the RemObjects SDK, which took care of the communications part. With a thread that controls its state, I can have a connection to a server or service that is reliable and can transfer anything from a status byte through to many megabytes of data (it is recommended that you use small chunks for large data so that you have more fine-grained control over errors and flow).
I now have a set of high-reliability templates that I can deploy to make a new variation quite easily, and they can be updated with new function calls without much hassle (the first thing I do is negotiate versions between the client and server so they know what they can support).
Because it all works at a high level, my code is just making "function calls" and never worrying about what the format on the wire is. Likewise, I can switch from their binary format to standard SOAP or other formats without changing the core logic. Finally, the connections can be local to the same machine (I use this for end-user apps talking to a background service) or to a machine on the LAN or the internet, all with the same code.

What techniques to use for server-side data reception for a large-scale mobile app

Fellow StackOverflowers,
We are building an iOS application that will record data which will have to be sent back to our server at certain times. The server will not send any data back to the client, other than confirmation that the data has been received successfully. Processing load on the server may become an issue, so we want to design our server/client communication so that overhead is kept as low as possible.
1) Would it be wise to use PHP to write the received data to the filesystem/database? It's easy and maintainable, but may be a lot less efficient than, for example, a Java application in GlassFish (or a hand-coded server daemon in C, if we choose the raw socket connection).
2) Would it be wise to write the received data directly to the MySQL database (running on the same server), or should we write the data to the filesystem first and parse it into the database asynchronously from its reception (i.e., at a time when the server has resources to spare)? A sketch of this second variant appears after these questions.
3) Which seems wiser: to use a protocol such as HTTP or FTP, or to build our own server daemon and have the clients connect to a socket and push data over it, as in this heavily simplified example:
SocketFD = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);  /* create a TCP socket */
/* ... connect(SocketFD, ...) to the server before writing ... */
write(SocketFD, theData, sizeOfTheData);
Or, as Krumelur points out, maybe this is a non-issue with regard to server load?
Thanks in advance!
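Regarding question 2, a minimal sketch of the spool-then-ingest pattern in Python (the spool path and readings table are invented for illustration): the receiving tier only appends files, and a separate worker drains them into the database when resources allow.
import os
import sqlite3
import uuid

SPOOL_DIR = "/var/spool/ingest"   # illustrative path

def spool(payload: bytes):
    # Write under a temporary name, then rename: the worker never sees
    # a partially written file.
    name = uuid.uuid4().hex
    tmp = os.path.join(SPOOL_DIR, name + ".tmp")
    with open(tmp, "wb") as f:
        f.write(payload)
    os.rename(tmp, os.path.join(SPOOL_DIR, name + ".dat"))

def drain(db: sqlite3.Connection):
    # Run from a worker process whenever the server has spare capacity.
    for fname in os.listdir(SPOOL_DIR):
        if fname.endswith(".dat"):
            path = os.path.join(SPOOL_DIR, fname)
            with open(path, "rb") as f:
                db.execute("INSERT INTO readings (payload) VALUES (?)",
                           (f.read(),))
            db.commit()
            os.remove(path)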
The answer to all three of these questions depends on your budget and on how serious the load will be.
I don't think PHP is a wise choice. If you have the time and skill to write something in C or C++, I'd recommend doing that, especially because it would give you thread control. If your budget doesn't reach that far, Java, as you suggested, would be a good option, or maybe Ruby or Python.
I would suggest using SQLite for storing the data in the app. If only part of the data is sent and you can keep that part separate from the rest, consider putting all of that data in a separate SQLite db; you can then send that entire file. If you need just a part of the data and are that concerned with server load, then I guess you have two options: either let the app create a SQLite file with all the data to transfer and send that file, or just send a serialized array.
On first thought I'd say you should use a SQLite db on the server side too, to ease the process of parsing incoming data into the db. On second thought, that's a bad idea, since SQLite handles concurrent writers poorly, and if your load is going to be that huge, that's not desirable.
Why not use WebSockets? There are daemons available in most languages. You could open a socket with every client that wishes to send data, then give a green light ("send it now") whenever a thread for processing becomes available. After the traffic is complete, you dispose of the connection. But this would only be efficient if the number of requests is so huge that the server has to do so much rescheduling that it would take more CPU than just doing what Krumelur suggests.
I wonder what you're building that will produce such a massive server load!

Server-side or client-side synchronization for mobile applications

If a mobile application needs to get data from multiple servers, is it better to call each server from the mobile device, or to call one server which then talks to all the other servers?
Also, should synchronization be initiated by the server or by the mobile client? To what degree does the client do the bookkeeping?
Say the application is a mobile email or voicemail client, in both cases.
Some of the main issues with mobile synchronization of personal information are the battery life of the handset and the temporary loss of connectivity.
That's why the usual way of doing what you describe is to have a server handle most of the complicated logic and the multiple data sources, creating the set of data to be synchronized, and then use a proprietary protocol between the server and the client to mirror just that set of data.
In effect, the connection to the server will always be initiated by the client, no matter how much people talk about "push" e-mail. Your client application can have a user option to make the phone stay online as much as network conditions allow. The server can react to a connection being established by automatically sending the latest data it needs synchronized with the client.
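Given the battery and connectivity constraints above, the client-initiated loop is usually written with backoff and jitter; a rough sketch, where sync_once stands in for whatever call mirrors the server's data set:
import random
import time

def sync_loop(sync_once, base=5.0, cap=300.0):
    delay = base
    while True:
        try:
            sync_once()                     # mirror the server's data set
            delay = base                    # success: normal polling cadence
        except OSError:
            delay = min(cap, delay * 2)     # no connectivity: back off
        time.sleep(delay + random.uniform(0, 1))  # jitter avoids sync storms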
Very vague question, but I would say both could be necessary. Your servers should coordinate as much as they need to make sure the data stored between them stays consistent. A buggy or malicious client should not be able to cause corruption or inconsistencies in the data stored on the server. The client should do whatever synchronization it needs to make sure that the local copy of the data is consistent and that it is not uploading garbage to the servers.
