How to manage multiple Connections to Zookeeper using Curator? - connection-pooling

I am new to Zookeeper and Curator.
I want to know if there is a way in Curator or Zookeeper to maintain a pool of Zookeeper connections.
I tried searching the web, but most people say a single CuratorFramework/Zookeeper client should be sufficient, or suggest creating a new connection each time when you need multiple connections in parallel.
The system I am designing will require a high number of parallel reads per second from Zookeeper. I am currently using Curator with a single Zookeeper connection. Multiple Zookeeper connections would allow reads to happen in parallel instead of sequentially. I also want to avoid the delay caused by initiating a new connection again and again.
Ideally I am assuming that some connection pooling already exists, but if not I might need to build it myself.
Can anyone point to something that already exists?
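If nothing like this exists, a hand-rolled version might look roughly like the sketch below: a fixed set of independently started CuratorFramework clients handed out round-robin, so each caller gets its own Zookeeper session. The class and sizes here are made up purely for illustration, not an existing Curator feature.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    // Hypothetical round-robin "pool" of CuratorFramework clients.
    // Each client holds its own Zookeeper session / TCP connection.
    public class CuratorClientPool implements AutoCloseable {
        private final List<CuratorFramework> clients = new ArrayList<>();
        private final AtomicInteger next = new AtomicInteger();

        public CuratorClientPool(String connectString, int size) {
            for (int i = 0; i < size; i++) {
                CuratorFramework client = CuratorFrameworkFactory.newClient(
                        connectString, new ExponentialBackoffRetry(1000, 3));
                client.start();            // open the Zookeeper connection up front
                clients.add(client);
            }
        }

        // Hand out clients round-robin; callers share them, nothing is "checked out".
        public CuratorFramework get() {
            int i = Math.floorMod(next.getAndIncrement(), clients.size());
            return clients.get(i);
        }

        @Override
        public void close() {
            clients.forEach(CuratorFramework::close);
        }
    }

Usage would then be something like pool.get().getData().forPath("/some/node"), spreading reads across the underlying sessions.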

Related

Is there any limit on the maximum number of TCP connections that can be made from a container to RabbitMQ?

I am new to RabbitMQ, and I am facing an error like "The connection cannot support any more channels. Consider creating a new connection". So my doubt is: can we create multiple TCP connections to RabbitMQ from a single Docker container? Is there any limit on the maximum number of TCP connections that can be made from a container? Please help.
I tried to find out from the docs but I didn't get a proper answer.
Can we create multiple TCP connections to RabbitMQ from a single Docker container?
Yes, you can.
Is there any limit on the maximum number of TCP connections that can be made from a container?
There is no hard-coded limit. It depends on the number of CPUs, memory, etc. RabbitMQ is no different from other kinds of services.
We have a production checklist with some best practices.
"The connection cannot support any more channels. Consider creating a new connection."
Adding too many channels inside one single connection is not recommended. There is no hard limit, but hundreds of channels on one connection is not a good value.
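As a rough illustration with the RabbitMQ Java client, opening a second TCP connection from the same process instead of piling more channels onto the first might look like this (host name, queue name and message are placeholders):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    // Sketch: keep the channel count per connection modest and open a second
    // TCP connection rather than hundreds of channels on one connection.
    public class RabbitConnections {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("rabbitmq-host");   // placeholder host name

            // Two separate TCP connections from the same container/process.
            try (Connection publishConn = factory.newConnection();
                 Connection consumeConn = factory.newConnection()) {

                // A few channels per connection is fine.
                Channel pubChannel = publishConn.createChannel();
                Channel conChannel = consumeConn.createChannel();

                pubChannel.queueDeclare("demo-queue", true, false, false, null);
                pubChannel.basicPublish("", "demo-queue", null, "hello".getBytes());
                // ... consume on conChannel as needed
            }
        }
    }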

Does it make sense to use Zookeeper in a masterless architecture?

In a distributed setup using consistent hashing, e.g. a distributed cache implementation using consistent hashing, how can we manage the multiple nodes? By managing I mean monitoring health and adjusting load when one of the servers dies or a new one is added, and similar tasks.
Here we don't have any master, as all peer nodes are equal. So a gossip protocol is one way. But I want to understand: can we use Zookeeper to manage nodes here? Or can Zookeeper only be used where we need master-slave coordination?
I think in your case Zookeeper can be used for leader election and for assigning the right token range to a node when a new node joins. In a very similar case, a previous version of Facebook Cassandra used Zookeeper for the same reason; however, the community later got rid of it. Read the Replication and Bootstrapping section of this.
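As a rough sketch of how that could look with Curator's LeaderLatch recipe (connection string, paths and node name are placeholders, not anything taken from Cassandra): each peer announces itself with an ephemeral znode and one peer is elected to coordinate rebalancing.

    import java.util.concurrent.TimeUnit;

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.leader.LeaderLatch;
    import org.apache.curator.retry.ExponentialBackoffRetry;
    import org.apache.zookeeper.CreateMode;

    // Sketch: peer nodes register under a membership znode and use a LeaderLatch
    // to elect one coordinator that could reassign token ranges when membership changes.
    public class PeerCoordinator {
        public static void main(String[] args) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "zk-host:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();

            // Ephemeral node announces this peer; it disappears if the peer dies.
            client.create()
                  .creatingParentsIfNeeded()
                  .withMode(CreateMode.EPHEMERAL)
                  .forPath("/cluster/members/node-1");   // placeholder paths

            LeaderLatch latch = new LeaderLatch(client, "/cluster/leader");
            latch.start();

            if (latch.await(10, TimeUnit.SECONDS)) {
                // This peer won the election; it could now rebalance token ranges.
                System.out.println("Elected coordinator");
            }
            // In a real node this would keep running; the latch and client
            // would be closed on shutdown.
        }
    }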

Can multiple ClientSocket components be placed on a Form?

I am looking to write a program that will connect to many computers from a single computer. Sort of like a "command center" where you can monitor all the remote systems from a single PC.
My plan is to have multiple ClientSockets on a form. They will connect to individual PCs remotely so they can request information from them to display in the window. The remote PCs will be the hosts. Is this possible?
Direct answer to your question: yes, you can do that.
Long answer: yes, you can do that, but are you sure your design is correct? Are you sure you want to create parallel connections, one to each client? Probably you don't! If you do, then you probably want to run them in separate threads.
If you only want to send some commands from time to time (and you are not doing some kind of constant video monitoring), why don't you just use one connection and 'switch' between clients?
I can't tell you more about the design because your question is not clear about what you want to build (what exactly you are 'monitoring').
VERY IMPORTANT!
Two important notes to take into account before designing your app (both relevant only if the remote computers are not on the LAN, i.e. you connect to them via the Internet):
If the remote computers are running as servers, you will have a lot of trouble explaining to your customers (if they are connected to the Internet via a router, and they probably are) how to set up the router and the software firewall. For example, if a remote computer is listening for commands from you on port 1234, the router's firewall will BY DEFAULT block any connection attempt from a 'foreign' computer (from you) to that port.
If your remote computers are running as clients, how will they know the master's IP (your IP)? Do you have a static IP?
What you actually need is one ServerSocket in the module running on your machine, to which all your remote PCs connect through their individual ClientSockets.
You can also design it the other way round by putting a ClientSocket in the module running on your machine and a ServerSocket in the module running on each remote machine.
But then you will end up creating one ClientSocket for each ServerSocket, which becomes a problem as the number of remote servers grows.
Now, if you still want to have multiple ClientSockets on your machine, then as Altar said you will need a multi-threaded application where each thread is responsible for one ClientSocket.
I would recommend Internet Direct (Indy), as its components work well in threads, and you can specify a connect timeout per connection, so that your monitoring app will be able to get a 'negative' test result faster than with the default OS connect timeout.
Instead of placing them on the form, I would wrap each client in a class which runs an internal monitoring thread. More work initially, but easier to keep them independent from each other.
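The question is about Delphi's ClientSocket component, but the "one class, one connection, one thread" idea is language-agnostic; here is a rough sketch of the pattern in Java purely for illustration (host names, port and the STATUS command are made up):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Each monitored host gets its own wrapper object and its own thread,
    // so a slow or dead host never blocks the others.
    public class MonitoredHost implements Runnable {
        private final String host;
        private final int port;

        public MonitoredHost(String host, int port) {
            this.host = host;
            this.port = port;
        }

        @Override
        public void run() {
            try (Socket socket = new Socket()) {
                // Explicit connect timeout so an unreachable host fails fast.
                socket.connect(new InetSocketAddress(host, port), 3000);
                PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));

                out.println("STATUS");                 // hypothetical protocol command
                System.out.println(host + ": " + in.readLine());
            } catch (Exception e) {
                System.out.println(host + ": unreachable (" + e.getMessage() + ")");
            }
        }

        public static void main(String[] args) {
            for (String host : new String[] {"pc-01", "pc-02", "pc-03"}) {
                new Thread(new MonitoredHost(host, 1234)).start();  // one thread per host
            }
        }
    }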

Is RabbitMQ more scalable than JMS outbound queue?

I want to know whether RabbitMQ is more scalable than other brokers or not.
If yes, what are the specific reasons? If not, how can we scale it up?
I am using RabbitMQ for the first time, with the Spring framework.
Even a single RabbitMQ broker is ridiculously fast. A stock desktop machine can handle tens to hundreds of thousands of messages per second.
If one rabbit turns out to not be enough, RabbitMQ supports a form of light-weight clustering that's designed specifically to improve scalability. Basically, it allows you to create "logical" brokers that are made up of many physical brokers.
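Since you mention Spring, a minimal Spring AMQP send sketch shows how little setup a single broker needs; the connection factory caches and reuses channels over one connection. Host, cache size, exchange and routing key here are placeholders.

    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;

    // Minimal Spring AMQP sketch: one shared connection factory, cached channels,
    // so a single broker goes a long way before you need clustering.
    public class RabbitSender {
        public static void main(String[] args) throws Exception {
            CachingConnectionFactory connectionFactory =
                    new CachingConnectionFactory("rabbitmq-host");  // placeholder host
            connectionFactory.setChannelCacheSize(25);  // reuse channels instead of reopening

            RabbitTemplate template = new RabbitTemplate(connectionFactory);
            template.convertAndSend("demo-exchange", "demo.key", "hello");

            connectionFactory.destroy();
        }
    }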

Java: Sharing a connection pool across other J2SE apps...?

So I have a connection pool set up, which is great since I have an application that really needs it. However, what I would like to know is whether it is possible to share this connection pool with other J2SE apps. Would this even be worth it, as opposed to creating a connection pool based on each app's needs? If it would be prudent, how can I accomplish this?
It is not hard to have connection pools in a single JVM doing multiple things - that is what application servers do every day (using JNDI to throw objects across classloaders).
The interesting part is when you have the connection pool in a separate JVM from the client code needing it, as this does not immediately allow simply asking for a connection from the pool and returning it afterwards.
Basically you have two options:
Do remote requests for all your JDBC commands over the network. This will most likely mean that the data travels over the network twice: from the database to the connection pool, and then from the connection pool to your application. If the database connections are very expensive objects, then this might be a viable solution.
Use RMI to get the connection object from the connection pool JVM to your own JVM. This is a very expensive operation, but as far as I know it can include the actual driver classes, allowing your connection pool to provide connections to databases not known to your application JVM. To me this would only make sense if the database connections were ridiculously expensive, or if it was a requirement to be able to support additional databases after deployment without changing the original deployments.
Note that the primary reason for having connection pools at all is that connections are expensive to create, use briefly, and then discard. Some databases more than others; e.g. MySQL is (or was when I tried) very cheap, so it might be simplest just to do that.
So, first of all: measure what your connection pool buys you in time, and then consider whether it is worth your while to centralize this further.
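For comparison, the simpler per-application alternative usually amounts to very little code. A rough sketch using HikariCP (chosen here only as one example pool implementation; the JDBC URL and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    // Sketch: each J2SE app keeps its own small pool sized to its own needs,
    // instead of sharing one pool across JVMs.
    public class PerAppPool {
        public static void main(String[] args) throws Exception {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:mysql://db-host/appdb");  // placeholder URL
            config.setUsername("app");
            config.setPassword("secret");
            config.setMaximumPoolSize(5);   // sized for this app only

            try (HikariDataSource pool = new HikariDataSource(config)) {
                try (Connection conn = pool.getConnection();
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT 1")) {
                    rs.next();
                    System.out.println("pooled connection works: " + rs.getInt(1));
                }
            }
        }
    }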
