How to show transfer progress during a DataSnap transmission? - delphi

I have an ISAPI DataSnap server and a client application which communicate over the web. I have been looking for a way to show data transmission progress while the client application is retrieving data or applying updates, but so far I haven't found anything except setting ClientDataSet.PacketRecords to a small number and running a loop to retrieve packets. Since my data contains BLOB fields, this method isn't very practical, as each record might grow beyond 1024 KB. Is there a way to monitor the actual TCP/IP communication between my client application and the server?
Is it possible to throw a TIdHTTPProxyServer on my client application and monitor data transmission using it?
Update:
Even if that's possible, I'm concerned about the send/receive routine being executed in the main thread, and thus blocking any GUI activity. I read somewhere that I could execute these calls (Refresh and ApplyUpdates) in separate threads, but I haven't got a clue on how to do this.
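For the "separate thread" part, here is a minimal sketch (assuming Delphi 2010 or later for anonymous threads; ClientDataSet1 and btnRefresh are hypothetical names). Note that the dataset and its connection are not thread safe, so the main thread must not touch them while the background call is running:

// Sketch only: run the blocking DataSnap call off the main thread.
// TThread lives in System.Classes; TClientDataSet in Datasnap.DBClient.
procedure TForm1.btnRefreshClick(Sender: TObject);
begin
  btnRefresh.Enabled := False;  // block re-entry while the call is in flight
  TThread.CreateAnonymousThread(
    procedure
    begin
      try
        ClientDataSet1.Refresh;   // or ClientDataSet1.ApplyUpdates(0)
      finally
        TThread.Queue(nil,        // back on the main thread for any GUI work
          procedure
          begin
            btnRefresh.Enabled := True;
          end);
      end;
    end).Start;
end;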

Related

queue and flow control with delphi using indy

I have a client/server application (TCP) built with Indy in Delphi.
I want to add a queue and flow control to my server-side application.
The server should not lose any client data when its traffic is saturated.
For example, I want to set the server's maximum bandwidth to 10 Mbps; when that 10 Mbps is fully used, the other clients should wait in a queue until bandwidth becomes free.
So I want to know how I can design this in Delphi.
Thanks, best regards.
The client should not send the message directly to the server. Put the message in a local store (for instance an SQLite database), and in a worker thread read the first message from the local store and try to send it to the server.
If the message was delivered to the server (no exception raised), delete it from the local store and process the next "first" message in the local store.
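A rough Delphi/Indy sketch of that store-and-forward loop (LoadOldestMessage and DeleteMessage are hypothetical helpers that would read and delete rows in the local SQLite store; host and port are made up):

uses
  Winapi.Windows, System.Classes, System.SysUtils, IdTCPClient;

type
  TSenderThread = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TSenderThread.Execute;
var
  Client: TIdTCPClient;
  MsgId: Integer;
  MsgText: string;
begin
  Client := TIdTCPClient.Create(nil);
  try
    Client.Host := 'myserver.example.com';  // hypothetical
    Client.Port := 6000;                    // hypothetical
    while not Terminated do
    begin
      if not LoadOldestMessage(MsgId, MsgText) then  // hypothetical: oldest row in local store
      begin
        Sleep(250);  // nothing queued, wait a bit
        Continue;
      end;
      try
        if not Client.Connected then
          Client.Connect;
        Client.IOHandler.WriteLn(MsgText);  // send one message
        DeleteMessage(MsgId);               // delete only after a successful send
      except
        Client.Disconnect;                  // keep the row and retry later
        Sleep(1000);
      end;
    end;
  finally
    Client.Free;
  end;
end;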
Within the TIdTCPServer.OnExecute method which receives the client data, it is possible to 'delay' processing of the incoming request with a simple Sleep call. The client data will stay in the TCP socket until the Sleep call has finished.
If your server keeps track of the current 'global' bandwidth usage for all clients, it is possible to set the Sleep time dynamically. You could even set different priorities for different clients.
So you would need a simple but thread safe bandwidth usage monitor, an algorithm which calculates sensible Sleep time values, and a way to assign this Sleep time to the individual client connection contexts.
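As a rough sketch of that idea (GlobalBandwidthMonitor and its two methods are hypothetical; they stand for a thread-safe byte counter shared by all client contexts):

uses
  Winapi.Windows, IdContext, IdGlobal;

procedure TMyServerForm.IdTCPServer1Execute(AContext: TIdContext);
var
  Data: TIdBytes;
  DelayMs: Integer;
begin
  // read one request's payload (real protocol framing omitted)
  AContext.Connection.IOHandler.ReadBytes(Data, 1024, False);

  GlobalBandwidthMonitor.AddBytes(Length(Data));       // hypothetical, thread safe
  DelayMs := GlobalBandwidthMonitor.SuggestedDelayMs;  // hypothetical: 0 while under the 10 Mbps limit

  if DelayMs > 0 then
    Sleep(DelayMs);  // further client data simply waits in the TCP socket

  // ... process Data and write the response here ...
end;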
See also:
https://en.wikipedia.org/wiki/Token_bucket
https://github.com/bandwidth-throttle/token-bucket for an example implementation in PHP
http://www.nurkiewicz.com/2011/03/tenfold-increase-in-server-throughput.html

How can I prevent Delphi from throwing an exception when 2 transactions are open (using FireDAC components and the Firebird DBMS)?

I built a Web API with Delphi using FireDAC and an HTTPServer component. The application uses a Firebird DBMS.
Everything was working fine until the moment I started to simulate multiple requests to the same API endpoint. This causes internal server exceptions reporting that a second transaction is being opened when there is already a transaction open.
I know that all connections are closed after being used and the objects are destroyed in order to prevent memory leaks, but I couldn't understand why the application triggers the exception.
Any help or thoughts that might drive me to a solution?
Multiple requests will be processed concurrently by the HTTP server.
So if two clients try to access the same resource (URL) at the same time, the server will need two sets of database connections and data access components.
If your application uses distinct objects - one set per client - and does this in a thread-safe way, both connections should work fine.
However, if you use only one datamodule to serve all incoming HTTP requests, proper serialization is required. It does not help to close connections after use; each connection must be used by only one thread at a time.
So to understand the potential reason for the error, more information about the actual design of your server is needed.
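For illustration, a sketch of the "one set of objects per request" approach, assuming an Indy TIdHTTPServer front end and FireDAC (the module name, the connection definition 'FB_Local' and the query are made up):

uses
  System.SysUtils, IdContext, IdCustomHTTPServer,
  FireDAC.Stan.Def, FireDAC.Stan.Async, FireDAC.DApt,
  FireDAC.Phys.FB, FireDAC.Comp.Client;

procedure TApiModule.HTTPServerCommandGet(AContext: TIdContext;
  ARequestInfo: TIdHTTPRequestInfo; AResponseInfo: TIdHTTPResponseInfo);
var
  Conn: TFDConnection;
  Qry: TFDQuery;
begin
  // every request gets its own connection, query and (implicit) transaction,
  // so two concurrent requests never share a transaction
  Conn := TFDConnection.Create(nil);
  Qry := TFDQuery.Create(nil);
  try
    Conn.ConnectionDefName := 'FB_Local';                // hypothetical Firebird definition
    Qry.Connection := Conn;
    Qry.Open('select count(*) as cnt from customers');   // hypothetical query
    AResponseInfo.ContentText := Qry.FieldByName('cnt').AsString;
  finally
    Qry.Free;
    Conn.Free;
  end;
end;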

What techniques to use for server side data reception of large-scale mobile app

Fellow StackOverflowers,
we are building an iOS application that will record data which will have to be sent back to our server at certain times. The server will not be sending back any data to the client, other than confirmation that the data has been received successfully. Processing load on the server may become an issue, so we want to design our server/client communication such that overhead is kept as low as possible.
1) Would it be wise to use PHP to write the received data to filesystem/database? It's easy and maintainable, but may be a lot less efficient than, for example, a Java application in Glassfish (or a 'hand-coded' server daemon in C if we choose the raw socket connection).
2) Would it be wise to write the received data directly to the MySQL database (running on the same server), or do you think we should write the data to the filesystem first and parse it into a database asynchronously from the reception of the data (i.e., at a time when the server has resources to spare)?
3) Which seems wiser: to use a protocol such as HTTP or FTP, or to build our own server daemon and have the clients connect to a socket and push data over it like in this heavily simplified example:
SocketFD = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
/* connect(SocketFD, ...) to the server omitted for brevity */
write(SocketFD, theData, sizeOfTheData);
Or, as Krumelur points out, maybe this is a non-issue with regard to server load?
Thanks in advance!
The answer to all three of these questions depends on your budget and how serious the load will be.
I don't think PHP is a wise choice. If you have the time and skill to write something in C or C++ or similar, I'd recommend doing that, especially because it gives you thread control. If your budget doesn't reach that far, Java, as you suggested, would be a good option, or maybe Ruby or Python.
I would suggest using SQLite for storing the data in the app. If only part of the data is sent and you can keep that part separate from the rest, consider putting all of that data in a separate SQLite DB. You can then send that entire file. If you need just a part of the data and are so concerned with the server load, then I guess you have two options: either let the app create an SQLite file with all the data to transfer and send that file, or just send a serialized array.
On first thought I'd say you should use an SQLite DB on the server side too, to ease the process of parsing incoming data into the DB. On second thought that's a bad idea, since SQLite doesn't support multithreading, and if your load is going to be that huge, that's not desirable.
Why not use WebSockets? There are daemons available in most languages. You could open a socket with every client that wishes to send data, then give a green light ("send it now") whenever a processing thread becomes available. After the traffic is complete you dispose of the connection. But this would only be efficient if the number of requests is so huge that the server would spend more CPU on rescheduling than it would by just doing what Krumelur suggests.
I wonder what you're building, and how it will produce such a massive server load!

What is the most common approach for designing large scale server programs?

Ok I know this is pretty broad, but let me narrow it down a bit. I've done a little bit of client-server programming but nothing that would need to handle more than just a couple clients at a time. So I was wondering design-wise what the most mainstream approach to these servers is. And if people could reference either tutorials, books, or ebooks.
Haha, ok, that didn't really narrow it down. I guess what I'm looking for is a simple but literal example of how the server-side program is set up.
The way I see it: the client sends a command; the server receives the command and puts it into a queue; the server has either a single dedicated thread or a thread pool that constantly polls this queue, then sends the appropriate response back to the client. Is non-blocking I/O often used?
I suppose just tutorials, time and practice are really what I need.
*EDIT: Thanks for your responses! Here is a little more of what I'm trying to do I suppose.
This is mainly for the purpose of learning so I'd rather steer away from use of frameworks or libraries as much as I can. Take for example this somewhat made up idea:
There is a client program that does some function and constantly streams the output to a server (there can be many of these clients); the server then creates statistics and stores most of the data. And let's say there is an admin client that can log into the server; if any clients are streaming data to the server, it in turn would stream that data to each of the admin clients connected.
This is how I envision the server program logic:
The server would have 3 threads for managing incoming connections (one for each port it listens on), each spawning a thread to manage every connection:
1) ClientConnection, which would basically just receive output, which we'll just say is text
2) AdminConnection, which would be for sending commands between the server and the admin client
3) AdminDataConnection, which would basically be for streaming client output to the admin client
When data comes in from a client to the server, the server parses what is relevant and puts that data in a queue, let's say adminDataQueue. In turn there is a thread that watches this queue and every 200 ms (or whatever) checks whether there is data; if there is, it cycles through the AdminDataConnections and sends it to each.
Now for the AdminConnection, this would be for any commands or direct requests of data. So you could request for statistics, the server-side would receive the command for statistics then send a command saying incoming statistics, then immediately after that send a statistics object or data.
As for the AdminDataConnection, it is just the output from the clients with maybe a few simple commands intertwined.
Aside from the bandwidth concern of all the client data being funneled together to each of the admin clients, what sort of problems would arise from this design due to scaling issues (again neglecting bandwidth between clients and server, and between admin clients and server)?
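To make the adminDataQueue idea above concrete, here is a rough Delphi sketch using TThreadedQueue<T> (Delphi XE or later); SendToAllAdminConnections and the other names are hypothetical:

uses
  System.Classes, System.SysUtils, System.SyncObjs, System.Generics.Collections;

type
  TDispatcherThread = class(TThread)
  protected
    procedure Execute; override;
  end;

var
  AdminDataQueue: TThreadedQueue<string>;

// called from each ClientConnection thread when client output arrives
procedure EnqueueClientOutput(const ALine: string);
begin
  AdminDataQueue.PushItem(ALine);
end;

// dispatcher thread: pops items and fans them out to every admin connection
procedure TDispatcherThread.Execute;
var
  Line: string;
begin
  while not Terminated do
    // PopItem blocks until data arrives or the pop timeout elapses, so there
    // is no need for a fixed 200 ms polling interval
    if AdminDataQueue.PopItem(Line) = wrSignaled then
      SendToAllAdminConnections(Line);  // hypothetical broadcast to admin clients
end;

initialization
  // depth 1000, 10 s push timeout, 500 ms pop timeout so Terminated is re-checked regularly
  AdminDataQueue := TThreadedQueue<string>.Create(1000, 10000, 500);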
There are a couple of basic approaches to doing this.
Worker threads or processes. Apache does this in most of its multiprocessing modes. In some versions of this, a thread or process is spawned for each request when the request arrives; in other versions, there's a pool of waiting threads which are assigned work as it arrives (avoiding the fork/thread create overhead when the request arrives).
Asynchronous (non-blocking) I/O and an event loop. This is basically using the UNIX select call (although both FreeBSD and Linux provide more optimized alternatives such as kqueue). lighttpd uses this approach and is able to achieve very high scalability, but any in-server computation blocks all other requests. Concurrent dynamic request handling is passed on to separate processes (via CGI) or waiting processes (via FastCGI or its equivalent).
I don't have any particular references handy to point you to, but looking at the web sites of open source projects that use the different approaches, for information on their design, wouldn't be a bad start.
In my experience, building a worker thread/process setup is easier when working from the ground up. If you have a good asynchronous framework that integrates fully with your other communications tasks (such as database queries), however, it can be very powerful and frees you from some (but not all) thread locking concerns. If you're working in Python, Twisted is one such framework. I've also been using Lwt for OCaml lately with good success.
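Since the rest of this page is Delphi-flavoured, here is a minimal sketch of the worker-thread approach using Indy's TIdTCPServer, which already runs one worker thread per client connection (the port and the echo "protocol" are arbitrary choices for illustration):

uses
  System.SysUtils, IdContext, IdTCPServer;

type
  TEchoServer = class
  private
    FServer: TIdTCPServer;
    procedure HandleExecute(AContext: TIdContext);
  public
    constructor Create(APort: Word);
    destructor Destroy; override;
  end;

constructor TEchoServer.Create(APort: Word);
begin
  inherited Create;
  FServer := TIdTCPServer.Create(nil);
  FServer.DefaultPort := APort;
  FServer.OnExecute := HandleExecute;  // Indy calls this from a per-connection worker thread
  FServer.Active := True;
end;

destructor TEchoServer.Destroy;
begin
  FServer.Active := False;
  FServer.Free;
  inherited;
end;

procedure TEchoServer.HandleExecute(AContext: TIdContext);
var
  Line: string;
begin
  // blocking reads here only block this client's thread, not the whole server
  Line := AContext.Connection.IOHandler.ReadLn;
  AContext.Connection.IOHandler.WriteLn('echo: ' + Line);
end;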

Server side synchronization for mobile applications or client side synchronization

if a mobile application needs to get data from multiple servers, is it better to call each server from the mobile device, or call one server which then talks to all the other servers?
"should synchronization be initiated by the server or the mobile client?" to what degree does client do the book keeping.
Say if the application is mobile email or voicemail client in both cases.
Some of the main issues with mobile synchronization of personal information are the battery life of the handset and the temporary loss of connectivity.
That's why the usual way of doing what you describe is to have a server handle most of the complicated logic and the multiple data sources, build the set of data to be synchronized, and then use a proprietary protocol between the server and the client to mirror just that set of data.
In effect, connection to the server will always be initiated by the client, no matter how much people talk about "push" e-mail. Your client application can have a user option to make the phone stay online as much as the network conditions allow. The server can react to a connection being established by automatically sending the latest data it needs synchronized with the client.
Very vague question, but I would say both could be necessary. Your servers should coordinate as much as they need to make sure the data stored between them stays consistent. A buggy or malicious client should not be able to cause corruption or inconsistencies in the data stored on the server. The client should do whatever synchronization it needs to make sure that the local copy of the data is consistent and that it is not uploading garbage to the servers.
