I have a strange scenario:
Webserver / Appserver (Java) sends requests to many different satellite systems (on customers' sites). Only the satellite systems can initiate connections, due to firewall rules.
The model, I think, should be something like REQ/REP, but here the REQuester would have to bind and the REPlyer would have to connect.
Is this possible and a stable architecture?
Are there better solutions? (We first had WebSockets in mind...)
Remark: we don't have to use Java on both ends. To be precise, on the customers' sites we have Delphi, but we could bridge it somehow.
The model, I think, should be something like REQ/REP, but here the REQuester would have to bind and the REPlyer would have to connect.
This will be problematic. When the server initiates the connection, it must be aware of all peers and their bind addresses. Not a big deal for a handful of peers, but for many peers changing constantly, it's a mess.
Only the satellite systems can initiate connections, due to firewall rules.
If that's the case, your mileage will vary with WebSockets; Google around, there's lots of info on this.
Are there better solutions?
Well, with ZeroMQ, one solution that comes to mind to support client request initiation is this:
Server binds with ROUTER
Clients connect with DEALER.
This approach offers bi-directional request/reply, does not block (asynchronous), and eliminates the client-side bind problem mentioned in your question. Here, the server binds, and either side can initiate the conversation.
I recommend reading this section in the guide; it covers extended async request/reply and message enveloping, which are important when using ROUTER/DEALER sockets.
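To make that concrete, here is a minimal sketch using the JeroMQ binding (the org.zeromq:jeromq artifact); the endpoint appserver.example.com, port 5555 and the "site-42" identity are placeholder assumptions, and error handling is omitted:

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class RouterDealerSketch {

    // Server side (your web/app server): it binds, so the satellites' firewalls don't matter.
    static void server() {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket router = ctx.createSocket(SocketType.ROUTER);
            router.bind("tcp://*:5555");                          // example port

            while (!Thread.currentThread().isInterrupted()) {
                byte[] identity = router.recv(0);                 // which satellite sent this
                String payload  = router.recvStr(0);
                // ... handle payload, look up pending work for this satellite, etc. ...
                router.sendMore(identity);                        // address the outgoing message
                router.send("reply for " + payload);              // a reply, or a server-initiated request
            }
        }
    }

    // Client side (the satellite; could just as well be Delphi via a ZeroMQ binding): connects outbound.
    static void client() {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket dealer = ctx.createSocket(SocketType.DEALER);
            dealer.setIdentity("site-42".getBytes(ZMQ.CHARSET));  // stable, unique id per satellite
            dealer.connect("tcp://appserver.example.com:5555");

            dealer.send("hello from site-42");
            String reply = dealer.recvStr(0);                     // or poll here for server-initiated work
        }
    }
}
```

Because the ROUTER socket sees a stable identity per satellite, the server can also push requests to a specific satellite whenever it likes, which is exactly the "server binds, either side initiates" behaviour described above.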
I would like to understand networking services with a large user base a bit better so that I know how to approach a project I am busy with.
The following statements that I make may be incorrect but they still lead to the question that I want to ask...
Please consider Skype and TeamViewer clients. It seems that both keep persistent network connections open to their respective servers. They use these persistent connections to initiate additional connections. Some of these connections are created by means of Hole Punching if the clients are behind NATs. They are then used for direct Peer-to-Peer communications.
Now, according to http://expandedramblings.com/index.php/skype-statistics/ there are 300 million users using Skype and 4.9 million daily active users. I would assume that most of those 4.9 million users will most probably have their client apps running most of the day. That is a lot of connections to the Skype servers that are open at any given time.
So, to my question: is this feasible or at least acceptable? I mean, wouldn't it be better not to have a network connection open while idle, especially when there are so many connections open to the servers at once? The only reason I can think of is that it would be the only way to properly do hole punching. Technically, how is this achieved on the server side?
Is this feasible or at least acceptable?
Feasible it certainly is; you already mention two popular apps that do it, so it is very doable in practice.
As for acceptable: to start, no Internet authority (e.g. the IETF) has ever said it is unacceptable to have long-lived connections, even with low traffic.
Furthermore, the only components for which this matters are network elements that keep connection/flow state. These are the endpoints and so-called middleboxes like NATs and firewalls. For the client this is only one connection, and the server is usually fine-tuned by the application developers (who made this choice) themselves, so for these it is acceptable. For middleboxes it's simple: they have no choice; they're designed to just work with all kinds of flows, including long-lived persistent connections.
I mean, wouldn't it be better not to have a network connection open while idle, especially when there are so many connections open to the servers at once?
Not at all. First of all, that could be 'much' slower as you'd need to set up a full connection before each control-plane call. This is especially noticeable if your RTT is big or if the servers do some complicated connection proxying/redirection for load-balancing/localization purposes.
Next to that, this would historically make incoming calls difficult for a huge number of users. Many ISPs block (or blocked) unknown incoming connections from the internet by means of a firewall. Similarly, if you are behind a NAT device that does not support UPnP or PCP, you can't open a port to listen on at your public IP address. So you need the persistent connection even aside from hole punching.
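To make the "one long-lived connection per client" idea concrete, here is a minimal Java sketch of such a control channel; the host control.example.com, port 5222, the 30-second interval and the one-byte heartbeat are all made-up placeholders:

```java
import java.io.OutputStream;
import java.net.Socket;

public class PersistentControlChannel {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint for the vendor's always-on control server.
        try (Socket socket = new Socket("control.example.com", 5222)) {
            socket.setKeepAlive(true);       // OS-level TCP keepalive as a safety net
            OutputStream out = socket.getOutputStream();
            while (true) {
                out.write(0x00);             // tiny application-level heartbeat
                out.flush();
                Thread.sleep(30_000);        // stay well below typical NAT/firewall idle timeouts
                // A real client would also read server-initiated messages here
                // (incoming call notifications, hole-punching instructions, ...).
            }
        }
    }
}
```

The outbound connection plus the periodic heartbeat is what keeps the NAT/firewall state alive, so the server can reach the client at any time without any inbound port being open.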
The only reason I can think of is that it would be the only way to properly do hole punching. Technically, how is this achieved on the server side?
Technically, you can't do proper hole punching as soon as the NAT devices maintain a full <src-ip, src-port, dest-ip, dest-port, protocol> (classic 5-tuple) flow match. In that case the best you can do with "hole punching" is to set up a proxy between the peers.
What hole punching relies on is that the NAT flow lookup only looks at <src-ip, src-port, protocol> upstream and <dest-ip, dest-port, protocol> downstream to do the translation. In that case both clients just set up a connection to the server, their IP and port get translated, and the server passes this translated endpoint to the other client. The other client can now start sending packets to that translated <ip, port> combination, which should work because the NAT ignores the server's IP/port. But even if the particular NAT works like this, some security device (e.g. a stateful firewall) might detect it as session hijacking and drop the traffic anyway.
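As a sketch of the server side (a "rendezvous" or introducer server), assuming UDP and a toy pairing rule where every two clients that show up get introduced to each other; the port 3478 is just an example:

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

public class RendezvousServer {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(3478)) {   // example port
            byte[] buf = new byte[64];
            SocketAddress first = null;

            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                // getSocketAddress() is the client's *translated* public ip:port,
                // i.e. exactly what the peer must send to for hole punching.
                SocketAddress seen = packet.getSocketAddress();

                if (first == null) {
                    first = seen;                                  // wait for a second client
                } else {
                    // Tell each client the public endpoint of the other one.
                    byte[] toFirst  = seen.toString().getBytes(StandardCharsets.UTF_8);
                    byte[] toSecond = first.toString().getBytes(StandardCharsets.UTF_8);
                    socket.send(new DatagramPacket(toFirst,  toFirst.length,  first));
                    socket.send(new DatagramPacket(toSecond, toSecond.length, seen));
                    first = null;
                }
            }
        }
    }
}
```

Each client then starts sending UDP datagrams to the endpoint it received; if the NAT only matches on <src-ip, src-port, protocol> as described above, those packets punch the hole.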
Nowadays you would rather use UPnP to open up a port to listen on at your public IP, which is much easier if it is supported.
I have a TCP/IP-based component which communicates with a C++-based system. In fact, it reads raw bytes from that system, marshals those raw bytes into objects, and stores them in the DB. This multi-threaded TCP/IP-based component is written in Java and could be deployed on a dual-core or quad-core processor (not sure if that's important for my question, but a detail I am giving nevertheless). Now I have a few questions:
How can I scale this TCP/IP-based component? It is deployed on a server and is listening on a port. If in the future there is more data coming from the C++ system than is envisaged at this point, we should be able to scale this Java component.
What about security? One thing I could probably do is run this communication over secure sockets, or receive encrypted data (any particular encryption I could use here?). Any other way to take care of security?
There is also a requirement of high availability to be satisfied. How do I handle that? How could I possibly have redundancy here?
Yes, we are working on the system architecture of a product, and I was wondering whether some experienced architect or designer could help me.
How can I scale this TCP/IP-based component? It is deployed on a server and is listening on a port. If in the future there is more data coming from the C++ system than is envisaged at this point, we should be able to scale this Java component.
You normally use a network load-balancer to scale these kinds of services across multiple servers. That load-balancer can distribute load using a variety of algorithms, such as:
CPU load (usually measured with SNMP)
Client IP address (if you need persistence when mapping clients to your services)
Number of active sockets
etc.
Look at HAProxy for a popular open-source load-balancer. F5 has the most popular commercial load-balancer solution.
What about security? One thing I could probably do is run this communication over secure sockets, or receive encrypted data (any particular encryption I could use here?). Any other way to take care of security?
As mentioned, SSL is an option, but understand that it is a big performance hit on your services if you encrypt on the same hardware that is performing your customer services. One option along these lines is using a commercial load-balancer that implements SSL in hardware; that load-balancer would then forward unencrypted sockets to your TCP services farm. (A minimal Java sketch of a TLS-wrapped listener follows after this list of options.)
Under some circumstances you can use IPSec network-level encryption; often this is another network hardware solution. Typically your clients will download an IPSec application that resides on their PC... then they make a connection into your IPSec server, which encrypts traffic between their client and your IPSec termination point.
SSH Tunneling with port-forwarding (low-tech solution)
tcpcrypt looks interesting as a future technology, but I'm not sure how mature it is right now.
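If you go the "secure sockets on the service itself" route from the first option above, a minimal Java (JSSE) sketch of a TLS-wrapped listener looks like this; the keystore file server.jks, the password and port 9000 are placeholders, and the echo handler just stands in for your real byte-stream protocol:

```java
import javax.net.ssl.*;
import java.io.*;
import java.security.KeyStore;

public class TlsEchoServer {
    public static void main(String[] args) throws Exception {
        char[] pass = "changeit".toCharArray();                 // placeholder password
        KeyStore ks = KeyStore.getInstance("JKS");
        try (InputStream in = new FileInputStream("server.jks")) {
            ks.load(in, pass);                                  // server certificate + private key
        }
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, pass);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), null, null);

        try (SSLServerSocket server =
                 (SSLServerSocket) ctx.getServerSocketFactory().createServerSocket(9000)) {
            while (true) {
                try (SSLSocket client = (SSLSocket) server.accept();
                     BufferedReader r = new BufferedReader(new InputStreamReader(client.getInputStream()));
                     PrintWriter w = new PrintWriter(client.getOutputStream(), true)) {
                    w.println(r.readLine());                    // echo one line, just to show the TLS plumbing
                }
            }
        }
    }
}
```

If you instead terminate SSL on a hardware load-balancer, the Java service itself stays a plain ServerSocket and only the balancer needs the certificate.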
There is also a requirement of high availability to be satisfied. How do I handle that? How could I possibly have redundancy here?
A lot depends on what you mean by high availability, and what kind of recovery timing you need. At a high level, you have a few options:
DNS-based HA works if you don't need client-to-socket mapping persistence; if you use DNS, you need to be willing to accept typical DNS A-record TTLs (usually people don't go lower than ~5 minutes / 300 seconds). This also assumes you find a way to synchronize your databases across multiple sites.
Load-balancer solutions, with the same issue of synchronizing back-end databases.
To do any kind of HA, you probably want to hire a consultant that has a proven track record of implementing these services (if you don't have this kind of resource in-house).
I have a custom pair of client/server sockets (TJDServerSocket and TJDClientSocket) which wrap the TServerSocket and TClientSocket in the ScktComp unit. I don't have any issues to fix, but would like to know something. I'd like to add a feature to the client side to automatically search the network for any instances of a server socket (specifically my server component).
I'm open to any suggestions, but it has to be specific to the use of the ScktComp unit in Delphi 7.
Here's a link to my components.
Never used the TServerSocket and TClientSocket myself, and I don't have the help files within reach, so I can't immediately see if this would work with those components.
For a project I did, I needed something like that too. I ended up using UDP to broadcast a discovery request (within the same subnet, of course). The server, listening on a particular port for such a request, would reply with its data. When multiple servers existed (a situation that, though rare, could occur) the client just picked the server with the required service(s) and the least load. That load was part of the data the server sent back.
It worked out nicely, wasn't all that difficult to write, and turned out reasonably efficient too.
The request protocol is completely up to you. The one I devised allowed clients to send a request detailing the services they needed, and had servers reply with a list of their services and their load (= connected clients in active use).
After selecting the server to talk to, a client would register itself for the services it needed, and could use them after that.
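Not ScktComp-specific, but to make the discovery exchange concrete, here is a minimal sketch of the client side of such a broadcast, written in Java for brevity (the port 8765 and the DISCOVER/OFFER strings are invented; the same pattern maps directly to a UDP socket in Delphi). The server would simply listen on that port and answer each request with its services and current load:

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

public class DiscoveryClient {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            socket.setSoTimeout(2000);                        // stop collecting after 2 s of silence

            byte[] request = "DISCOVER my-service".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(request, request.length,
                    InetAddress.getByName("255.255.255.255"), 8765));

            byte[] buf = new byte[512];
            while (true) {
                DatagramPacket reply = new DatagramPacket(buf, buf.length);
                try {
                    socket.receive(reply);
                } catch (SocketTimeoutException e) {
                    break;                                    // no more servers answering
                }
                String payload = new String(reply.getData(), 0, reply.getLength(), StandardCharsets.UTF_8);
                // e.g. "OFFER my-service load=3" -- pick the responder with the lowest load
                System.out.println(reply.getAddress().getHostAddress() + " -> " + payload);
            }
        }
    }
}
```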
Hope this helps.
There are some standard protocols for service discovery. See for example: http://en.wikipedia.org/wiki/Zero_configuration_networking
Mailslots is a nice option here. It'll broadcast to every PC on your subnet. See Jeroen's answer to this question:
Suggestions on writing a TCP IP messaging system (Client/Server) using Delphi 2010
Searching is as easy as port scanning.
If you don't like the brute-force approach, the server can register itself with a well-known service application (it could be a web server), and the client can connect to the service application to ask. It's quieter than broadcasting.
With more information, such as details about the network (who's it for?), I can suggest a more precise answer.
Hi, let me make my question clear. Two people using my app are connected to the internet. Both have each other's IP and they want to chat (like Y! Messenger) with each other.
I think I need to use Indy components, right? Which component should I use?
Thanks in advance
Have you looked at any of the demos on Indy's website yet?
In general, you are looking to create a "Client/Server" type application. A quick Google search for "indy client server example" pulls up lots of results, including this one: http://www.devarticles.com/c/a/Delphi-Kylix/A-Real-World-Client-Server-Application-in-Delphi/
In reality, this gets a lot more complicated when you have firewalls and NATs with private IP addresses. You will have to consider how your application will either get around or through these types of technologies.
Similar to what Scott said, I think that your biggest problem is getting them talking to each other. My computers at home go through a router, which blocks all incoming connection requests (i.e. requests to start a conversation between two computers) from the Internet. My computers can send connection requests OUT, and start a conversation that way, but unless you modify the router (port forwarding) my computers cannot receive connection requests.
You need a server somewhere to which both people will connect, that can then relay messages back and forth. To get really tricky, once the connection is made to the server the two computers can then be put into direct contact, but that involves UDP packets and some clever magic.
You don't have to use Indy components; you just need anything that will handle communications over the network. Any HTTP or sockets network stack will do. Indy is the de facto standard for Delphi Win32.
To do network communications, you will need to create a listener object or service on machine A and a sender object on machine B to send a network message from A to B. To send a message from B to A, you will need a reverse path as well - 4 objects total to perform bidirectional comms. Some object wrappers hide this detail internally. I don't recall offhand whether Indy hides this or not.
It would probably be easiest if you use a common TCP/IP protocol for your machine-to-machine communications, such as HTTP. This will make it easier to get your connections through the firewalls and proxies that frequently exist between arbitrary users. To avoid conflicting with any HTTP web services that might be running on either machine, you should use a custom port number with the IP address (e.g. 192.168.1.10:12345), not the standard HTTP web server port 80. This is what most of the IM clients do.
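To illustrate the listener/sender pairing (sketched in Java purely for brevity; in Delphi, Indy's TIdTCPServer and TIdTCPClient play the same two roles), reusing the example port 12345 from above:

```java
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

public class OneWayChat {

    // Machine A: the listener object -- waits for incoming messages on the custom port.
    static void listen() throws IOException {
        try (ServerSocket server = new ServerSocket(12345)) {
            while (true) {
                try (Socket peer = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(peer.getInputStream()))) {
                    System.out.println("peer says: " + in.readLine());
                }
            }
        }
    }

    // Machine B: the sender object -- connects out and delivers one message.
    static void send(String peerIp, String message) throws IOException {
        try (Socket socket = new Socket(peerIp, 12345);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            out.println(message);
        }
    }
}
```

For two-way chat each machine runs both halves, which is the "4 objects total" mentioned above, and it only works directly if the routers/NATs in between allow the inbound connection.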
I'm not a network specialist, so my apologies if I've used some of the domain terminology incorrectly. For web metrics/analytics, we currently use both client-side (JS page tags) and server-side (log files) data. Neither gives us "delivery" information (e.g., connection speeds), hence the interest in network collectors (N/Cs). We are in a switched environment, so installing the N/C as if it were a web server, i.e., on a switch port, won't (I don't think) allow it to see the web server traffic.
After some research, I've learned how to place the N/C by configuring a monitoring port. What concerns me about this is that the monitoring port appears to work by duplicating the traffic within the switch.
Is there a better solution for N/C placement in this type of network environment?
Don't worry, Doug: switches nowadays won't falter under this sort of load. The approach you have described is quite OK.
Of course, you could buy a more expensive switch with "NetFlow"-style support... and have the switch collect the data for you.