I've used InfluxDB for a while now, but never had to continuously stream data from it. A simple GET /query was sufficient. But now I need a way to stream data to the frontend to draw pretty graphs and such.
So far we've been running GET /query periodically from the frontend, but this is highly inefficient. I would much rather keep the connection open and receive the data when it's written to the DB. Searching the interwebs, there doesn't seem to be support for either WebSockets or HTTP/2 in InfluxDB right now.
So, a question to others who have possibly hit this issue: how did you solve it?
InfluxDB v1.x supports subscriptions. As data is written to InfluxDB, writes are duplicated to subscriber endpoints via HTTP, HTTPS, or UDP in line protocol.
https://docs.influxdata.com/influxdb/v1.7/administration/subscription-management/
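For example, a 1.x subscription can be created through the same /query endpoint you already use. Here is a minimal sketch in Python, assuming InfluxDB on localhost:8086; the database name "mydb", subscription name, and receiver endpoint are hypothetical:

    import requests

    # CREATE SUBSCRIPTION is a 1.x InfluxQL management command; it is sent
    # through the regular /query endpoint.
    q = ('CREATE SUBSCRIPTION "mysub" ON "mydb"."autogen" '
         "DESTINATIONS ALL 'http://my-receiver:9090'")
    resp = requests.post("http://localhost:8086/query", params={"q": q})
    resp.raise_for_status()
    # From now on, every point written to mydb.autogen is also POSTed to
    # my-receiver:9090 in line protocol; that receiver is where you fan
    # the data out to the frontend (e.g. over WebSockets).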
I'm going over their documentation here:
http://www.rubydoc.info/github/github/statsd-ruby/Statsd
And there are methods for recording data, but I can't seem to find anything about retrieving recorded data. I'm adopting a project with an existing statsd setup. Its host is likely a defunct URL. Perhaps the host is where those stats are recorded?
The statsd server implementations that Mircea links to just take care of receiving and aggregating metrics and publishing them to a backend service. From Etsy's statsd definition:
A network daemon that runs on the Node.js platform and listens for statistics, like counters and timers, sent over UDP or TCP and sends aggregates to one or more pluggable backend services (e.g., Graphite).
To retrieve the recorded data you have to query the backend. Check the list of available backends. The most common one is Graphite.
See also this question: How does StatsD store its data?
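For instance, here is a minimal sketch of reading a series back out of Graphite's render API; the Graphite host and metric path are hypothetical:

    import requests

    resp = requests.get(
        "http://graphite.example.com/render",
        params={
            "target": "stats.counters.page.views.count",  # hypothetical metric
            "from": "-1h",
            "format": "json",
        },
    )
    # The render API returns one object per series, each with a list of
    # [value, timestamp] datapoints.
    for series in resp.json():
        print(series["target"], series["datapoints"][:3])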
There are 2 parts to statsd: a client and a server.
What you're looking at is the client part. You will not see any functionality related to retrieving the data because it's not there; that normally lives on the server side.
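To make the split concrete, here is a minimal sketch of everything a statsd client does on the wire; the host and metric names are hypothetical:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    addr = ("statsd.example.com", 8125)      # default statsd UDP port
    sock.sendto(b"page.views:1|c", addr)     # increment a counter
    sock.sendto(b"login.time:320|ms", addr)  # record a timing
    # That's the whole client side: fire-and-forget datagrams. Reading the
    # data back happens against the backend (e.g. Graphite), never statsd.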
Here is a list of statsd server implementations:
http://www.joemiller.me/2011/09/21/list-of-statsd-server-implementations/
Research and pick one that fits your needs.
Statsd originally started at Etsy: https://github.com/etsy/statsd/wiki
Let's say we have an app that displays some kind of dashboard. This dashboard, however, should be updated extremely often (say, every 500 ms). I'm familiar with long-poll requests and know how I could implement them with NSURLConnection on some background thread. However, it seems this will lead to two big problems: request/response concurrency and the overhead of long-poll requests at such short time intervals. Although the first problem can be solved with some techniques, I think such frequent requests to a server are a general problem.
So after some research I found the NSStream class and its descendants NSInputStream and NSOutputStream. My idea is to make a connection to the server and keep it alive the whole time, and just send a GET request on the output stream at 500 ms intervals and read the data from the input stream.
So here are my questions:
Am I on the right track for implementing this?
Should the server be prepared in some special way to deal with this kind of connection (I mean, won't it drop the connection after some timeout)?
Is there a real benefit to skipping connection establishment, to improve app performance and lower the refresh time of the dashboard?
UPDATE
I've implemented it the classic way. When the request method fires and the previous request has not yet finished, I cancel it, so basically I have only one active connection at a time to prevent concurrency. Also, if I don't receive a response within 500 ms I don't need that response at all, as it will be outdated anyway. I'm getting pretty neat results on both Wi-Fi and 3G. As I expected, on EDGE a response is dropped every 3 to 4 requests.
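For reference, the same single-in-flight pattern sketched in Python asyncio (the question itself uses NSURLConnection; the simulated fetch is a stand-in for the real HTTP request):

    import asyncio
    import random

    async def fetch_dashboard():
        # Stand-in for the real HTTP request, with simulated latency.
        await asyncio.sleep(random.uniform(0.1, 1.0))
        return {"value": random.random()}

    async def poll(interval=0.5):
        inflight = None
        while True:
            if inflight is not None and not inflight.done():
                inflight.cancel()  # a late response would be outdated anyway
            inflight = asyncio.ensure_future(fetch_dashboard())
            inflight.add_done_callback(
                lambda t: t.cancelled() or print("update:", t.result()))
            await asyncio.sleep(interval)  # fire the next request on schedule

    asyncio.run(poll())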
Still wondering about the streams, however. I did try to follow this Apple reference, but when I send an HTTP GET via the output stream, my input stream returns 403 Forbidden from the server. This could be entirely a server problem, but I'm not sure whether this is the right track and whether it's worth changing the server side.
Q1) Am I on the right track for implementing this?
A) I'd suggest WebSockets.
Q2) Should the server be prepared in some special way for dealing with this kind of connection (I mean, won't it drop the connection after some timeout)?
A) Even though you could try configuring persistent (keep-alive) connections on the web server to do it easily, I'd suggest WebSockets.
Q3) Is there a real benefit to skipping connection establishment, to improve app performance and lower the refresh time of the dashboard?
A) Yes. Opening and closing connections is a costly process, which is why keep-alive connections exist and why Google introduced SPDY for web apps. So sockets would solve this problem for you, and WebSockets are a good way to go.
Frequent polling is not the way to go, because you would be contacting the server every 0.5 seconds.
WebSocket provides full-duplex communication. Additionally, WebSocket enables streams of messages on top of TCP. TCP alone deals with streams of bytes with no inherent concept of a message.
The WebSocket protocol was standardized by the IETF as RFC 6455 in 2011, and the WebSocket API in Web IDL is being standardized by the W3C.
WebSocket is designed to be implemented in web browsers and web servers, but it can be used by any client or server application. The WebSocket Protocol is an independent TCP-based protocol. Its only relationship to HTTP is that its handshake is interpreted by HTTP servers as an Upgrade request. The WebSocket protocol makes more interaction between a browser and a website possible, facilitating live content and the creation of real-time games. This is made possible by providing a standardized way for the server to send content to the browser without being solicited by the client, and by allowing messages to be passed back and forth while keeping the connection open. In this way a two-way (bi-directional) ongoing conversation can take place between a browser and the server.
You can find more about WebSockets here
Here are some good WebSocket client libraries for Objective-C: SocketRocket and UnittWebSocketClient.
Note: these libraries use NSStream.
Hope this helps.
As long as your server is a plain HTTP server, it will disconnect you after returning the result.
So, if you want to keep the connection alive long enough, you must implement your own protocol based on NSStream/sockets on both iOS and the server.
You may choose a well-known socket-based protocol like WebSocket; Square's SocketRocket is a popular library for iOS based on NSStream.
If your dashboard needs real-time updates, I think it's worth deploying an NSStream/socket-based protocol.
I would like to synchronize an IMAP mailbox from one server to another in real time. Currently I am using the imapsync software, which pulls messages from the source server and copies them to the destination server.
I have tried mbsync (isync) too, but it only supports the remote IMAP4 mailbox type, not Maildir.
Can anyone suggest a real-time mailbox syncing solution that supports the Maildir remote mailbox type?
As Max said, the issue is not with IMAP. IMAP is merely the protocol for serving messages to clients.
What you want to do is to synchronize the backing store, i.e. the place where the messages are kept. It sounds like you are using Maildir. I don't know Maildir, but since it uses the local filesystem, I would try to sync that between the servers. The simplest solution that I can think of here is to use something like Dropbox for this.
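As a rough illustration of syncing the backing store rather than IMAP itself, here is a minimal sketch that mirrors a Maildir to another host with rsync in a loop; the paths and host are hypothetical, and an inotify-based watcher would be a better fit for near-realtime:

    import subprocess
    import time

    SRC = "/home/user/Maildir/"              # hypothetical local Maildir
    DEST = "other-host:/home/user/Maildir/"  # hypothetical remote target

    while True:
        # rsync only transfers new/changed files; -a preserves the file
        # attributes that Maildir relies on.
        subprocess.run(["rsync", "-a", SRC, DEST], check=True)
        time.sleep(5)  # crude near-realtime loop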
There are some practical reasons why realtime syncing of IMAP messages could be hard though, one of which is that the UID values have to increase with time, so you can not have two servers allocating UIDs without each of them knowing what the last UID was.
I don't think stackoverflow is the right setting for this question, since it does not sound like a development problem. Maybe try https://serverfault.com/
I am writing an application that keeps track of content pushed around between users working on a certain task. I am thinking of using WebSockets to send down new content as it becomes available to all users who are currently using the app for that given task.
I am writing this on Rails and the client side app is on iOS (probably going to be in Android too). I'm afraid that this WebSocket solution might not scale well. I am after some advice and things to consider while making the decision to go with WebSockets vs. some kind of polling solution.
Would Ruby on Rails servers (e.g. on Heroku) support a large number of WebSockets open at the same time? Let's say a million connections, for argument's sake. Can anyone point me to material on this?
Would it cost a lot more on server hosting if I architect it this way?
Is it even possible to maintain millions of WebSockets simultaneously? I feel like this may not be the best design decision.
This is my first try at a proper Rails API. Any advice is greatly appreciated. Thx.
A million connections over WebSockets using Ruby: I can't see that being realistic unless you use clustering to spread the connections between different instances to handle all the data processing.
The problem here is serializing and deserializing data.
You also have to research how often you will need to push data from the server to the client, and whether it's worth holding a connection open the whole time rather than doing periodic checks with AJAX. If you hold a connection open and then don't use it, it's a waste of resources. WebSockets are built on top of the TCP layer, and connections are not "cheap": going through the OS and repeatedly asking whether data is available is not a simple process, and with millions of connections it is almost impossible without the most advanced technology in the world.
I've heard that Erlang is able to handle millions of connections, but I don't have details on it. Also, the connections are one thing; processing the data and the interaction between connections is another. You'll want to check this, because if you have heavy processing algorithms you will definitely need to look into horizontal scaling and clustering solutions.
If you are implementing chat, use WebSockets.
If you are implementing one-way messages in real time, use server-sent events (a minimal sketch follows below).
If you are implementing one-way messages sent every few hours or so, use APNS.
The saying goes: phone in hand, use WebSockets / server-sent events.
Phone in pocket, use APNS.
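Here is a minimal sketch of what a server-sent-events endpoint boils down to, using only the Python standard library; the port and payload are hypothetical:

    import time
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    class SSEHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/event-stream")
            self.send_header("Cache-Control", "no-cache")
            self.end_headers()
            # Each event is just "data: ...\n\n" on a connection the
            # server deliberately keeps open.
            for i in range(10):
                self.wfile.write(f"data: message {i}\n\n".encode())
                self.wfile.flush()
                time.sleep(1)

    ThreadingHTTPServer(("", 8000), SSEHandler).serve_forever()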
APNS will ride out Wi-Fi dips, TCP/IP socket hangs, and many other issues. Really useful. There is a chance it may take a little time to get through, but then again, there is a chance WebSockets will take a while too.
Recent versions of iOS let you send a silent APNS notification to the client, without a popup, so the app can ask the server for more information. That, along with some backgrounding implementations, really improves things.
If possible, do not implement totally anonymous clients. It is very tricky to detect whether a client has reinstalled the app, so you'll end up sending duplicates to the client. You need to take that into account.
APNS looks trivial to implement in Ruby, but I'd suggest resisting the urge and using an existing gem/service that supports both Google and Apple. It is much trickier to implement than it may seem at first.
If you decide to stick with websockets, it may make sense to just leverage websockets in nginx like https://github.com/wandenberg/nginx-push-stream-module
ASIDE:
Using SMS where speed is critical is very expensive. A phone number costs $1/month and can send at a maximum rate of 1 message per second. So sending 100 messages per second requires 100 numbers = $100/month plus per-message fees, while sending at 50 messages/second costs $50/month. But at 50 messages/second, sending 1k messages takes 20 seconds.
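The arithmetic above, spelled out (the $1/month-per-number and 1-message-per-second figures are the ones quoted in this answer):

    def sms_plan(messages, rate_per_sec):
        numbers_needed = rate_per_sec            # each number sends 1 msg/sec
        monthly_cost = numbers_needed * 1.0      # $1/month per number
        seconds_to_send = messages / rate_per_sec
        return monthly_cost, seconds_to_send

    print(sms_plan(100, 100))   # (100.0, 1.0)  -> $100/month, sent in 1 s
    print(sms_plan(100, 50))    # (50.0, 2.0)   -> $50/month, sent in 2 s
    print(sms_plan(1000, 50))   # (50.0, 20.0)  -> $50/month, sent in 20 s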
Good luck
Fellow StackOverflowers,
we are building an iOS application that will record data which will have to be sent back to our server at certain times. The server will not be sending back any data to the client, other than confirmation that the data has been received successfully. Processing load on the server may become an issue, so we want to design our server/client communication such that overhead is kept as low as possible.
1) Would it be wise to use PHP to write the received data to the filesystem/database? It's easy and maintainable, but may be much less efficient than, for example, a Java application in Glassfish (or a 'hand-coded' server daemon in C, if we choose the raw socket connection).
2) Would it be wise to write the received data directly to the MySQL database (running on the same server), or do you think we should write the data to the filesystem first and parse it into the database asynchronously from its reception (i.e., at a time when the server has resources to spare)?
3) Which seems wiser: to use a protocol such as HTTP or FTP, or to build our own server daemon and have the clients connect to a socket and push data over it like in this heavily simplified example:
int SocketFD = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
/* connect(SocketFD, ...) omitted for brevity */
write(SocketFD, theData, sizeOfTheData);
Or, as Krumelur points out, maybe this is a non-issue with regard to server load?
Thanks in advance!
The answer to all three of these questions depends on your budget and how serious the load will be.
I think PHP isn't a wise choice. If you have the time and skill to write something in C or C++, I'd recommend doing that, especially because it would give you thread control. If your budget doesn't reach that far, Java, as you suggested, would be a good option, or maybe Ruby or Python.
I would suggest using SQLite for storing the data in the app. If only part of the data is sent and you can keep that part separate from the rest, consider putting all that data in a separate SQLite db; you can then send that entire file. If you need just part of the data and are that concerned with server load, then I guess you have two options: either let the app create a SQLite file with all the data to transfer and send that file, or just send a serialized array.
On first thought I'd say you should use a SQLite db on the server side too, to ease the process of parsing incoming data into the db. On second thought, that's a bad idea, since SQLite doesn't support multithreading, and if your load is going to be that huge, that's not desirable.
Why not use WebSockets? There are daemons available in most languages. You could open a socket with every client that wishes to send data, then give a green light ("send it now") whenever a processing thread becomes available. After the traffic is complete you dispose of the connection. But this would only be efficient if the number of requests is so huge that the server would spend more CPU on rescheduling than on just doing what Krumelur suggests.
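As a rough sketch of the "accept now, ingest when resources are free" idea from question 2, assuming the web tier spools uploads to a directory as JSON files (all paths and the schema are hypothetical):

    import glob
    import json
    import os
    import sqlite3
    import time

    SPOOL = "/var/spool/myapp"     # where the web tier dumps uploads
    DB = "/var/lib/myapp/data.db"

    conn = sqlite3.connect(DB)
    conn.execute("CREATE TABLE IF NOT EXISTS samples (device TEXT, value REAL)")

    while True:
        for path in sorted(glob.glob(os.path.join(SPOOL, "*.json"))):
            with open(path) as f:
                batch = json.load(f)  # assumed: list of {"device":..., "value":...}
            conn.executemany(
                "INSERT INTO samples VALUES (:device, :value)", batch)
            conn.commit()
            os.remove(path)           # only after a successful commit
        time.sleep(1)                 # ingest whenever there are spare cycles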
I wonder what you're building, and how it will be producing such a massive server load!