I need to create a distributed system, where I have the following node types:
Client [1-n instances]
Server [1-n instances]
Proxy [1 instance - basically a forwarder to any Server]
Cloud Server [1 instance - basically a forwarder to any Proxy]
[Client] -> [Cloud Server] -> [Proxy -> Server] - a Distributed Setup
[Client -> Server] - a Local Setup
[The Proxy and Server are running on the same node or network]
The Client, once it is on the same network as the Server, should also be allowed to connect directly to the Server instead of going through the Cloud Server / Proxy.
The Server can have multiple Clients connected to it, and besides responding to requests from the Client(s) it can also publish messages to them. The Server / Cloud Server need to differentiate the client nodes by ID and know at any time whether each of them is connected or disconnected.
To my understanding, the Server should provide a REQ/REP endpoint to allow message exchange with the Proxy / local Client, and also a PUB endpoint to which the Proxy / local Client will be subscribed for any notifications coming from the Server.
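For illustration, here is a minimal sketch of what I have in mind for the Server (pyzmq; the ports and payloads are placeholders I made up):

    import zmq

    ctx = zmq.Context.instance()

    # REQ/REP endpoint: message exchange with the Proxy or a local Client
    rep = ctx.socket(zmq.REP)
    rep.bind("tcp://*:5560")

    # PUB endpoint: notifications the Proxy or local Clients subscribe to
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5561")

    while True:
        request = rep.recv()
        rep.send(b"reply:" + request)            # answer the request
        pub.send(b"notify: handled " + request)  # and publish a notification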
Concerning the Proxy, it looks like I will need two endpoints on the inside and two on the outside: ROUTER/DEALER endpoints for the REQ/REP traffic, and XPUB/XSUB endpoints for the PUB/SUB notifications targeting remote Clients. My concern is that on the outside the Proxy will always have exactly one node to reply to, and exactly one subscriber to its notifications, and that is the Cloud Server.
Concerning the Cloud Server, it looks like I will need something similar to the Proxy described above, but unlike the Proxy, here the ROUTER/DEALER and XPUB/XSUB combination does seem to fit the bill.
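For the request path of the Proxy, this is roughly what I mean (pyzmq again, with placeholder addresses); the notification path would be a second proxy of the same shape, XSUB on the inside and XPUB on the outside:

    import zmq

    ctx = zmq.Context.instance()

    # Outside: ROUTER endpoint that the Cloud Server connects to
    frontend = ctx.socket(zmq.ROUTER)
    frontend.bind("tcp://*:6000")

    # Inside: DEALER connected to the Server's REP endpoint
    backend = ctx.socket(zmq.DEALER)
    backend.connect("tcp://localhost:5560")

    # Shuttle messages in both directions until the context terminates
    zmq.proxy(frontend, backend)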
Obviously I am new to ZeroMQ and it offers a lot. I would like to focus on what is needed for the time being and I would really appreciate your help.
Well, ZeroMQ is a great tool to design and build systems with, but the first thing I would recommend to anyone, be they a keen young novice or a seasoned, hands-on Computer Science veteran, is: forget treating the patterns as a plug-and-play solution to everything.
Having built "a few" distributed, low-latency systems, there are many similarities one will, sooner or later, meet in person. Some of ZeroMQ's built-in primitives for the Formal Scalable Communication Patterns have "almost" matching behaviour, except that one needs an ordering other than the built-in round-robin stepper; others are "almost" matching, except for some particular sequence-of-steps requirement which one cannot guarantee in the reality of the world of distributed agents. Simply put, there are many moments when one feels "almost" done, but some tiny ( or hidden ) detail makes a ( hidden ) call of "Houston, we have a problem..."
How to focus on what is needed?
Forget thinking in a classical, sequential manner.
Distributed systems are several orders of magnitude more complex than a plain, sequentially programmed monolithic system. Besides the principal design targets, there are many more things that can and will go wrong in production.
Revisit your design rules and carefully check anew for:
1. scaling aspects: define hidden needs - ( nodes, message sizes, traffic peaks )
2. blocking states: define additional signalling needs, so a node can always get out of any potential distributed-state dead-/live-locking ( a minimal timeout sketch follows this list )
3. survivability needs - a distributed system will meet lost messages and lost node(s)
4. incoherent protocol versions - for cases where no one can guarantee enforced version unity across the distributed system
5. self-defensive needs - in case a remote node starts panicked or flawed flooding of a signalling/messaging channel. OOP as-is does not provide self-defensive tools and cannot limit calls injected by remote requestors, so build a set of protective tools for internal self-healing in cases where an object's service is over-consumed or misused by an external caller, so as to harden your design against such erroneous / malicious modes of operation, which a normal, typical OOP-model method cannot protect itself from.
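As an illustration of point 2, here is a minimal sketch of one such escape hatch: a receive timeout, so a node can never block forever waiting for a message that was lost. It uses pyzmq; the endpoint and the 2-second limit are arbitrary choices for the example:

    import zmq

    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://*:5555")
    sock.setsockopt(zmq.RCVTIMEO, 2000)  # give up after 2 s instead of blocking forever
    sock.setsockopt(zmq.LINGER, 0)       # never hang on close waiting for unsent messages

    try:
        request = sock.recv()
        sock.send(b"ack")
    except zmq.Again:
        # No request arrived in time: run recovery logic here instead of
        # sitting in a potential distributed dead-lock.
        pass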
The Best Next Step:
Real-world System Architecture simply must contain more "wires"
If your code strives to reach production state, and not to remain just an academic example, much more work will have to be done to provide survivability measures for the hostile real-world production environments.
An absolutely great perspective for doing this, and a good read for realistic designs with ZeroMQ, is Pieter HINTJENS' book "Code Connected, Vol. 1" ( you may check my posts on ZeroMQ to find the book's direct pdf-link ).
Plus another good read comes from Martin SUSTRIK, the co-father of ZeroMQ, on low-level truths about the ZeroMQ implementation details and scalability.
Epilogue: As a bonus, the REQ/REP primitive communication pattern is dangerous per se in the real world, as it can self-deadlock the communicating pair of processes in case the transport ( for whatever reason ) loses a packet and the "pendulum"-style message delivery becomes incomplete and locked forever.
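The usual way out, described in the guide as the "Lazy Pirate" pattern, is to poll with a timeout and throw the stuck REQ socket away. A minimal sketch in pyzmq, with an illustrative endpoint and timeout:

    import zmq

    SERVER = "tcp://localhost:5555"  # illustrative endpoint
    TIMEOUT_MS = 2500
    RETRIES = 3

    ctx = zmq.Context.instance()

    def request(payload: bytes):
        for _ in range(RETRIES):
            sock = ctx.socket(zmq.REQ)
            sock.connect(SERVER)
            sock.send(payload)
            if sock.poll(TIMEOUT_MS, zmq.POLLIN):
                reply = sock.recv()
                sock.close(linger=0)
                return reply
            # No reply arrived: the "pendulum" is stuck, so discard this
            # REQ socket entirely and retry with a fresh one.
            sock.close(linger=0)
        return None  # server presumed dead after all retries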
Basically, I want to implement a VoIP system with SIP on a VPS server. But it seems that it would not be able to handle more than ~20 simultaneous calls (just bare SIP). What are the workarounds to this problem? Can the SIP server just be used as a database that tells the clients where to find their intended targets, like P2P? I am quite new to SIP. Additional info is appreciated.
Your VPS server looks to be pretty low-key, and when you say it can't handle more than ~20 calls, that seems to indicate it has topped out on CPU. Correct me if that's not the case.
Options to Scale SIP
Off-the-shelf SIP load balancer - available as virtual, hardware, or open source, and in every flavor that you want. It hides the farm of SIP servers behind it and can be managed to spread the load accordingly.
Unless the nature of the SIP server is defined, it can be difficult to understand the bottlenecks you face, and without that it's difficult to give a simple solution.
SIP scalability comes from delegating as much work to the endpoints and doing as little on the servers as possible.
What you describe is a "redirect server": it accepts and stores registrations from the endpoints (softphones, hardphones, etc), and responds with "3xx redirect" to incoming calls and forgets about them immediately.
This is probably the most extreme example of server minimization. SIP is a very versatile protocol, it lets you set up your server infrastructure in many different ways with varying degree of control over calls. It lets you trade off features for performance.
Even the flimsiest VPS should be able to handle the signalling for way more than 20 parallel calls even in full "stateful proxy" mode.
Just make sure media (the RTP streams) is not routed through your server. Set up STUN to help firewalled endpoints send media to each other directly.
I have a strange scenario:
A web server / app server (Java) sends requests to many different satellite systems (on customers' sites). Only the satellite systems can initiate a connection due to firewall rules.
The model I think should be something like REQ/REP, but here the REQuester has to bind and the REPlyer would have to connect.
Is this possible and a stable architecture?
Are there better solutions? (We first had WebSockets in mind...)
Remark: we don't have to use Java on both ends. To be precise, on the customers' site we have Delphi, but we could bridge it somehow.
"The model I think should be something like REQ/REP, but here the REQuester has to bind and the REPlyer would have to connect."
This will be problematic. When the server initiates the connection, it must be aware of all peers and their bind addresses. Not a big deal for a handful of peers, but for many peers changing constantly, it's a mess.
"Only satellite systems can initiate connection due to firewall rules."
If that's the case, your mileage will vary with WebSockets; google around, lots of info on this.
"Are there better solutions?"
Well, with ZeroMQ, one solution that comes to mind to support client-side connection initiation is this:
Server binds with ROUTER.
Clients connect with DEALER.
This approach offers bi-directional request/reply, does not block (asynchronous), and eliminates the client-side bind problem mentioned in your question. Here, the server binds, and either side can initiate the conversation.
I recommend reading this section in the guide, it covers extended async request/reply and message enveloping, important when using ROUTER/DEALER sockets.
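To make the shape concrete, here is a minimal pyzmq sketch (both ends in one process so it runs as-is; in reality the DEALER side connects out through the firewall, and the identity and addresses are made up):

    import zmq

    ctx = zmq.Context.instance()

    # Server side (your web/app server): ROUTER binds, so the firewall
    # rules at the satellite sites are never an issue.
    server = ctx.socket(zmq.ROUTER)
    server.bind("tcp://*:5555")

    # Satellite side: DEALER connects out, with a stable routing identity.
    client = ctx.socket(zmq.DEALER)
    client.setsockopt(zmq.IDENTITY, b"satellite-42")
    client.connect("tcp://localhost:5555")

    # Client-initiated message: ROUTER receives [identity, payload]
    client.send(b"hello from the satellite")
    identity, payload = server.recv_multipart()

    # Server-initiated message: address the satellite by its identity
    server.send_multipart([identity, b"work item #1"])
    print(client.recv())  # b'work item #1'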
We're looking to implement ActiveMQ to handle messaging between two of our servers, over a geographically diverse environment (Australia to the UK and back, via the internet).
I've been looking for some vague indicators of performance round the net but so far have had no luck.
My question: compared to a DIY TCP/SSL implementation of basic messaging, how would ActiveMQ perform? Similar systems of our own can send and receive messages across Australia in 100-150ms, over a SSL layer with an already established connection.
Also, does ActiveMQ persist its TLS/SSL connections, thus saving a substantial amount of time that would already be used in connection creation/teardown?
What I am hoping is that it will at least perform better than HTTPS, at a per-request level.
I am aware that performance can vary remarkably, depending on hardware, networks, code and so on. I'm just after something to start with.
I know the above is a little fuzzy - if you need any clarification please let me know and I will only be too happy to oblige.
Thank you.
What Tim means is that this is not an apples to apples comparison. If you are solely concerned with the performance of a single point to point connection to transfer data, a direct link will give you a good result (although DIY is still a dubious design decision). If you are building a system that requires the transfer of data and you have more complex functional requirements, then a broker-based messaging platform like ActiveMQ will come into play.
You should consider broker-based messaging if you want:
a post-office style system where a producer sends a message, and knows that it will be consumed at some point, even if there is no consumer there at that time
to not care where the consumer of a message is, or how many of them there are
a guarantee that a message will be consumed, even if the consumer that first handles it dies midway through the process (transactions, redelivery)
many consumers, with a guarantee that a message will only be consumed once - queues
many consumers that will each react to a single message - topics
These patterns are pretty standard and apply to all off-the-shelf messaging products. As a general rule, DIY in this domain is a bad idea, as messaging is complex (see http://www.ohloh.net/p/activemq/estimated_cost for an estimate of how long it would take you to do the same); and there are many existing implementations of various flavours (some without a broker) that are all well used, commercially supported, and don't require you to maintain them. I would think very hard before going down to the TCP level for any sort of data transfer, as there is so much prior art.
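As a rough illustration of how little client code an off-the-shelf broker demands, here is a sketch using the Python stomp.py client against ActiveMQ's STOMP connector. Host, credentials and queue name are made up, and the listener signature shown is the stomp.py 8.x one, so treat the details as version-dependent:

    import stomp

    class OrderListener(stomp.ConnectionListener):
        def on_message(self, frame):
            # Redelivery and transactions are the broker's job; we just consume.
            print("consumed:", frame.body)

    conn = stomp.Connection([("broker.example.com", 61613)])  # made-up host
    conn.set_listener("", OrderListener())
    conn.connect("user", "password", wait=True)

    # Queue semantics: each message is consumed exactly once,
    # no matter how many consumers are subscribed.
    conn.subscribe(destination="/queue/orders", id=1, ack="auto")
    conn.send(destination="/queue/orders", body="order #1001")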
We intend to design a system with three "tiers".
HQ, with a single server
lots of "nodes" on a regional basis
users, with iPads.
HQ communicates 2-way with the nodes, which communicate 2-way with the users. Users never communicate with HQ, nor vice-versa.
The powers that be decree a Windows app for HQ (using Delphi) and a native app for the users' iPads. They have no opinion on the nodes.
If there are compelling technical arguments, I might be able to beat them down from "decree" to "prefer" on the Windows program (and, for instance, make it browser-based). The nodes have no GUI; they just sit there playing middle-man.
What's the best way for these things to communicate (SOAP/HTTP/AJAX/jQuery/home-brewed-protocol-on-top-of-TCP/something-else?) Is it best to use the same protocol end to end, or different protocols for hq<-->node and node<-->iPad?
Both ends of each of those two interfaces might wish to initiate a transaction (which I can easily do if I roll my own protocol), so should I use push/pull/long-poll or what?
I hope that this description makes sense. Please ask questions if it does not. Thanks.
Update:
File size is typically below 1MB, with nothing likely to be above 10MB or even 5MB. No second file will be sent before the first file is acknowledged.
Files flow "downhill" from HQ to node to iPad. Files will never flow "uphill", but there will be some small packets of data (in addition to acks) which are initiated by user action on the iPad. These will go to the local node and then to the HQ. We are probably talking <128 bytes.
I suppose there will also be general control & maintenance traffic at a low rate, in all directions.
For push / pull (publish / subscribe or peer to peer communication), cross-platform message brokers could be used. I am not sure if there are (iOS) client libraries for Microsoft Message Queue (MSMQ), but I would also evaluate open source solutions like HornetQ, Apache ActiveMQ, Apollo, OpenMQ, Apache QPid or RabbitMQ.
All these solutions provide a reliable foundation for distributed messaging - failover, clustering, persistence - with high performance and many attached clients. On this infrastructure, messages with any content type (JSON, binary, plain text) can be exchanged, and on top, messages can carry routing and priority information. They also support transacted messaging.
There are Delphi and Free Pascal client libraries available for many enterprise-quality open source messaging products. (I am the author of some of them, supporting ActiveMQ, Apollo, HornetQ, OpenMQ and RabbitMQ.)
Check out MessagePack: http://msgpack.org/ (a small usage sketch follows the links below).
Also, here's more RPC discussion on SO:
RPC frameworks available?
MessagePack: fast cross-platform serializer and RPC - please share experience
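For a feel of what MessagePack looks like in use, a minimal sketch with the Python msgpack package; the field names are invented, sized like the small control packets described in the question:

    import msgpack

    # Pack a small iPad -> node -> HQ control message
    packed = msgpack.packb({"node": "region-7", "event": "user_ack", "seq": 42})
    print(len(packed), "bytes on the wire")  # comfortably under 128 bytes

    # Unpack on the receiving side
    message = msgpack.unpackb(packed)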
ICE might be of interest to you: http://zeroc.com/index.html
They have an iOS layer: http://zeroc.com/icetouch/index.html
IMHO there are too few requirements here to decide what technology to use. What data is exchanged, how often, and at what size? Are there request/response time constraints? Etc., etc. Never start selecting a technology before you understand your needs deeply.
I need to use one logical PGM-based multicast address in the application, while enabling the application to run "seamlessly" across several different geo-locations (think US/Europe/Australia).
The application is quite demanding in throughput (several million business messages a day) and latency, with a lot of small but very frequently sent messages. Classical AtomPub will not work here due to external latency limits.
I have come up with several options to connect those datacenters but can’t find the best one.
Options which I have considered are:
1) Forward multicast messages via VPNs (can a VPN handle such a big load?).
2) Translate all multicast messages to “wrapper messages” and forward them via AMQP.
3) Write a specialized in-house gateway which tunnels multicast messages via TCP to the other two locations.
4) Any other solution
I would prefer option 1, as it does not need additional code written by the devs, but I'm afraid it will not be a reliable connection.
Are there any rules to apply for such connectivity?
What is the best network configuration, with regard to the geographical layout, for the above constraints?
Just wanted to say hello :)
As for the topic: we do not have much experience with multicasting over WAN; however, my feeling is that PGM + WAN + a high volume of data would lead to retransmission storms. A VPN won't make this problem disappear, as all the Australian receivers would, when confronted with missing packets, send NACKs to Europe, etc.
The PGM specification does allow for a tree structure of nodes for message delivery, so in theory you could place a single node on the receiving side that would, in its turn, re-multicast the data locally. However, I am not sure whether this kind of functionality is available in the MS implementation of PGM. Optionally, you can place a Cisco router with PGM support on the receiving side to handle this for you.
In any case, my preference would be to convert the data to a TCP stream, pass it over the WAN, and then convert it back to PGM on the other side. Some code has to be written, but no nasty surprises are to be expected.
Martin S.
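To make the TCP-tunnel idea concrete, here is a minimal sketch of one direction of such a gateway, written with pyzmq, which exposes PGM as the epgm:// transport when libzmq is built with PGM support. The interface, multicast group and hosts are made up:

    import zmq

    ctx = zmq.Context.instance()

    # Local side: subscribe to the PGM multicast feed
    # ("epgm://interface;group:port" is PGM encapsulated in UDP)
    local_in = ctx.socket(zmq.SUB)
    local_in.connect("epgm://eth0;239.192.1.1:5555")
    local_in.setsockopt(zmq.SUBSCRIBE, b"")

    # WAN side: a plain TCP stream to the remote gateway,
    # so no PGM NACKs ever cross the ocean.
    wan_out = ctx.socket(zmq.PUSH)
    wan_out.connect("tcp://remote-gateway.example.com:6000")

    while True:
        wan_out.send(local_in.recv())

    # The remote gateway does the mirror image: PULL from TCP,
    # then re-publish locally on an epgm:// PUB socket.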
At CohesiveFT we ran into a very similar problem when we designed our "VPN-Cubed" product for connecting multiple clouds up to servers behind our own firewall, in one VPN. We wanted to be able to run apps that talk to each other using multicast, but, for example, Amazon EC2 does not support multicast, for reasons that should be fairly obvious if you consider the potential for network storms across a whole data center. We also wanted to route traffic across a wide-area federation of nodes over the internet.
Without going into too much detail, the solution involved combining tunneling with standard routing protocols like BGP, and open technologies for VPNs. We used RabbitMQ AMQP to deliver messages in a pubsub style without needing physical multicast. This means you can fake multicast over wide area subnets, even across domains and firewalls, provided you are in the VPN-Cubed safe harbour. It works because it is a 'network overlay' as described in technical note here: http://blog.elasticserver.com/2008/12/vpn-cubed-technical-overview.html
I don't intend to actually offer you a specific solution, but I do hope this answer gives you confidence to try some of these approaches.
Cheers, alexis