How many Redis connections are used with ActionCable? - ruby-on-rails

I'm looking at moving my app's deployment to Heroku, and I'd like to determine if it can correctly run there on the basic plan before putting in the effort to migrate. The basic plan limits Redis to 20 connections.
I don't fundamentally understand the Rails/Redis connection architecture. Is there a single connection to ActionCable, which then distributes the data, or is there one connection per actual client (i.e. one connection for every browser tab)?

As per the docs:
An individual user will create one consumer-connection pair per browser tab, window, or device they have open.
ActionCable lets you identify a connection using a connection identifier, typically a global object called current_user. With this approach, you can later retrieve all open connections for a given user (and potentially disconnect them all if the user is deleted, is unauthorized, or has too many connections open).
Also note that ActionCable uses a worker pool to run connection callbacks and channel actions in isolation from your server's main thread.
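A minimal sketch of that identification scheme, close to the example in the Rails guides (the cookie-based lookup is an assumption about your auth setup):

# app/channels/application_cable/connection.rb
module ApplicationCable
  class Connection < ActionCable::Connection::Base
    identified_by :current_user

    def connect
      self.current_user = find_verified_user
    end

    private

    def find_verified_user
      # Assumes the user id lives in an encrypted session cookie.
      User.find_by(id: cookies.encrypted[:user_id]) ||
        reject_unauthorized_connection
    end
  end
end

With an identifier in place, ActionCable.server.remote_connections.where(current_user: user).disconnect closes every open connection for that user.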

Related

Notify backend that container has been deployed

I want to create containers which start small web services. Developers on our team should then upload small images which contain different services. A main backend system then uses these services.
My problem is: when a developer uploads a new service, how does the backend know there is a new service it can use? Previously, when service X was requested and no service provided that functionality, the backend just returned a simple message. Once a service that does X has been uploaded, the main backend should use it. But how does the backend know the service is there and should be used?
You can add a notification mechanism to your small web service. But what do you do when the service goes down unexpectedly, or the network is briefly unavailable? You need to add logic to the clients for re-establishing the connection.
Docker's documentation recommends this approach:
The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.
To handle this, your application should attempt to re-establish a connection to the database after a failure. If the application retries the connection, it should eventually be able to connect to the database.
The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason.
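A minimal retry sketch in Ruby, using Redis as a stand-in for any backing service (the helper name and backoff numbers are illustrative):

require "redis"

# Hypothetical helper: retry the connection with capped exponential backoff,
# both at startup and whenever a call fails mid-flight.
def with_backend_retry(max_attempts: 10)
  attempts = 0
  begin
    attempts += 1
    yield Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379"))
  rescue Redis::CannotConnectError
    raise if attempts >= max_attempts
    sleep [2**attempts, 30].min # back off, capped at 30 seconds
    retry
  end
end

with_backend_retry { |redis| redis.ping }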

Is there a restriction on opening IMAP connections from the same IP address?

Hi, I am implementing an email client application. My requirement is that I need to monitor all the mailboxes available on a specified IMAP server, so I created a separate TCP connection for each mailbox. But I keep getting disconnected from the IMAP server. I am testing against Gmail and Yahoo. Is there any restriction on opening multiple connections from the same IP to a particular IMAP server, particularly for Gmail and Yahoo?
Or is there any way to monitor all the mailboxes over a single connection? IMAP NOTIFY does not seem to be supported by either Gmail or Yahoo.
Please help me out.
This is something which I have answered on Stack Overflow before, but which is now only available via the Wayback Machine. The question was about how to "kill too many parallel IMAP connections". Reprinted below; the core takeaway is that, for some reason, most server administrators prefer a smaller number of short-lived connections over a larger number of connections that stay active for a long time yet spend most of it silently idling in the background. What they do not get is that the IMAP protocol is designed with long-lived connections in mind, and trying to prevent them wastes resources, because the clients will constantly resync mailboxes as they hop among them.
The original answer follows:
Nope, it's a very wrong idea. IMAP is designed so that monitoring a single mailbox takes one connection; in most IMAP server implementations, this means a single process. However, unless the client the user is using is terribly broken, all these connections enter the IDLE mode. In IDLE, the clients are passively notified about any updates to the mailbox state. If you kill these connections, the clients have to actively poll for changes in many mailboxes. Now decide for yourself: what is worse, having ten processes sitting idle, or one process doing heavy polling every two minutes? Which of these solutions would consume more energy, CPU time and I/O operations? That's for the number of parallel connections.
The second question was about the long-lived connections. Again, this is a critical aspect of IMAP -- each connection carries a lot of associated state information which is rather expensive to obtain. Unless your server implements certain extensions and your clients use them (ESEARCH, CONDSTORE, QRESYNC are the crucial bits), opening a mailbox can require O(n) operations. I don't know how many messages your users have, but do you really want to transfer e.g. message flags for 250k messages when you decided to kill a connection because it has been active for "too long"?
Finally, any reasonable IMAP server vendor offers a way to configure a per-user session limit on the number of concurrent processes. Using that is much better than maintaining a script for ad-hoc killing of "unused" connections.
If you would like to learn more about the synchronization process, my thesis about using IMAP on clients with flaky network and limited resources describes what the clients have to do in order to show an updated view of mailboxes to their users.
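To make the one-connection-per-mailbox-plus-IDLE pattern concrete, here is a minimal sketch in Ruby (host, credentials, mailbox, and the idle_done-from-the-handler pattern are assumptions; assumes a recent net-imap):

require "net/imap"

# One long-lived connection, parked in IDLE so the server pushes updates
# instead of the client polling.
imap = Net::IMAP.new("imap.example.com", port: 993, ssl: true)
imap.login("user@example.com", "app-password")
imap.select("INBOX")

loop do
  # idle blocks until the server sends an update or the timeout elapses;
  # re-entering it periodically keeps NAT/firewall state alive.
  imap.idle(29 * 60) do |response|
    if response.is_a?(Net::IMAP::UntaggedResponse) && response.name == "EXISTS"
      imap.idle_done # leave IDLE so the new messages can be fetched
    end
  end
  # ...fetch the new messages here, then loop back into IDLE...
end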

Best practice: Keep TCP/IP connection open or close it after each transfer?

My server app uses a TIdTCPServer; several client apps use TIdTCPClients to connect to the server (all computers are on the same LAN).
Some of the clients only need to contact the server every couple of minutes, others once per second, and one about 20 times per second.
If I keep the connection between a Client and the Server open, I'll save the re-connect, but have to check if the connection is lost.
If I close the connection after each transfer, it has to re-connect every time, but there's no need to check if the connection is still there.
What is the best way to do this?
At which frequency of data transfers should I keep the connection open in general?
What are other advantages / disadvantages for both scenarios?
I would suggest a mix of the two. When a new connection is opened, start an idle timer for it. Whenever data is exchanged, reset the timer. If the timer elapses, close the connection (or send a command to the client asking if it wants the connection to remain open). If the connection has been closed when data needs to be sent, open a new connection and repeat. This way, less-often-used connections can be closed periodically, while more-often-used connections can stay open.
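A sketch of that hybrid approach in Ruby (port, timeout, and the echo protocol are illustrative):

require "socket"

IDLE_TIMEOUT = 120 # seconds

# Keep a connection open while it is active; close it once it has been
# idle for IDLE_TIMEOUT seconds.
server = TCPServer.new(9000)

loop do
  Thread.new(server.accept) do |sock|
    loop do
      # IO.select doubles as the idle timer: it returns nil on timeout.
      break unless IO.select([sock], nil, nil, IDLE_TIMEOUT)
      line = sock.gets
      break if line.nil?               # peer closed the connection
      sock.puts("echo: #{line.chomp}") # any exchange resets the idle timer
    end
    sock.close
  end
end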
Two cents from experience...
My first TCP/IP client/server application used a new connection and a new thread for each request... years ago...
Then I discovered (using Process Explorer) that it consumed a lot of network resources, because closed connections are not actually destroyed right away; they remain in a particular state (TIME_WAIT) for some time. And a lot of threads were being created...
I even had connection problems under many concurrent requests: I didn't have enough ports on my server!
So I rewrote it following the HTTP/1.1 scheme and its keep-alive feature. It's much more efficient, uses a small number of threads, and Process Explorer likes my new server. And I never ran out of ports again. :)
If clients have to be shut down, I'd use a thread pool so that, at the very least, I don't create one thread per client...
In short: if you can, keep your client connections alive for some minutes.
While it may be fine to connect and disconnect for an application that is active once every few minutes, the application that is communicating several times a second will see a performance boost by leaving the connection open.
Additionally, your code will be much simpler if you aren't constantly trying to open, close, or diagnose the connection. With proper open and close logic, and exception handling around your reads and writes, there's no reason to test whether the socket is still connected before using it; just use it, and it will tell you when there is a problem.
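For example, a minimal Ruby sketch of "just use it and let the failure surface" (the method name and error set are illustrative):

require "socket"

# Hypothetical wrapper: no "is it still connected?" probe before use; write,
# and let the exception tell you the connection is gone.
def request(conn, payload)
  conn.puts(payload)
  conn.gets
rescue Errno::EPIPE, Errno::ECONNRESET, IOError
  :connection_lost # signal the caller to reconnect and retry
end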
I'd lean towards keeping a single connection open in most enterprise applications. It generally will lead to cleaner code, that is easier to maintain.
/twocents
I guess it all depends on your goal and the number of requests made to the server in a given time, not to mention the available bandwidth and the hardware of the server.
You need to think about the future as well: is there any chance that you will later need connections to be left open? If so, you've answered your own question.
I've implemented a chat system for a project in which ~50 people (and the number is growing every couple of months) are always connected, and besides chatting it also includes data transfer, database manipulation using certain commands, etc. My implementation keeps the connection to the server open from application startup until the application is closed; no issues so far, and if the connection is lost for some reason, it is automatically re-established and everything continues flawlessly.
Overall, I suggest you try both (keeping the connection open and closing it after each use) and see which fits your needs best.
Unless you are scaling to many hundreds of concurrent connections, I would definitely keep it open; this is by far the better of the two options. Once you scale past hundreds into thousands of concurrent connections, you may have to drop and reconnect. I have architected my entire framework around this (http://www.csinnovations.com/framework_overview.htm) since it allows me to "push" data to the client from the server whenever required. You need to write a fair bit of code to ensure that the connection is up and working (network drop-outs, timed pings, etc.), but if you do this in your "framework", then your application code can be written so that you can assume the connection is always "up".
Another problem is the limit on threads per application, around 1400 threads, so a maximum of roughly 1300 clients connected at the same time.
When closing connections as a client, the local port you used remains unavailable for a while. So at high volume, you end up consuming loads of different ports. For anything repetitive, I'd keep the connection open.

Which connection-management strategy should we use when developing an application?

Which connection-management approach is better when developing a Windows-based application that uses a database as its data store? What about web-based applications?
1. When the user loads the first form of the application, a global connection opens, and on closing the last form of the application the connection is closed and disposed.
2. Each form within the application has its own connection (form scope); when the user wants to perform an operation (insert, update, delete, search, ...), the application uses that connection, and when the form is unloaded the connection is closed and disposed.
3. Every operation within a form gets its own connection (procedure scope); when the user wants to perform an operation (insert, update, delete, search, ...), the application uses the procedure's connection, and at the end of the procedure the connection is closed and disposed.
Go with #3.
You should only ever keep connections open for as long as they are required.
Also have a look at:
Understanding Connection Pooling
SQL Server Connection Pooling (ADO.NET)
Connecting to a database server typically consists of several time-consuming steps. A physical channel such as a socket or a named pipe must be established, the initial handshake with the server must occur, the connection string information must be parsed, the connection must be authenticated by the server, checks must be run for enlisting in the current transaction, and so on.
In practice, most applications use only one or a few different configurations for connections. This means that during application execution, many identical connections will be repeatedly opened and closed. To minimize the cost of opening connections, ADO.NET uses an optimization technique called connection pooling.
Connection pooling reduces the number of times that new connections must be opened. The pooler maintains ownership of the physical connection. It manages connections by keeping alive a set of active connections for each given connection configuration. Whenever a user calls Open on a connection, the pooler looks for an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pooled set of active connections instead of closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call.
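The same idea exists outside ADO.NET; here is a minimal Ruby sketch using the connection_pool gem (the pool size and the Redis backend are illustrative):

require "connection_pool"
require "redis"

# A pool of five long-lived connections shared by all threads; a checkout
# blocks for up to five seconds if every connection is in use.
POOL = ConnectionPool.new(size: 5, timeout: 5) { Redis.new }

# "Open" becomes a cheap checkout from the pool, and "close" returns the
# connection to the pool instead of tearing it down.
POOL.with do |redis|
  redis.incr("page_views")
end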
This is quite a broad question. But usually, for any database server and application environment, opening and keeping a new connection is an expensive operation. That's why you definitely don't want to open multiple connections from a single client, and should stick to process scope for connections.
In a desktop application using a database server, the strategy for handling its single connection depends a lot on the DB usage pattern. Say the app reads or writes a lot within 5 minutes and then does nothing with the DB for hours; then it makes no sense to keep the connection open the whole time (assuming there are many other clients), and you may introduce some kind of timeout for closing the connection.
The web server situation depends a lot on the technology used. In PHP, for example, every request is a "fresh start" with respect to the database connection: you open and close a connection for each mouse click. Popular Java application servers, on the other hand, have a DB connection pool, reusing the same connection instances across many HTTP request-handling threads.

Difference between shareable and unshareable connections in a JDBC connection pool?

We noticed something strange in our Struts web application, which is hosted on Sun App Server Enterprise Edition 8.1.
The NumConnUsed metric for JDBC resource monitoring stays at over 100 connections even though there is relatively little user activity.
I did some research and found the following links:
http://j2ee-performance.blogspot.com/
http://www.ibm.com/developerworks/websphere/library/techarticles/0506_johnsen/0506_johnsen.html
"When the application closes a shareable connection, the connection is not truly closed, nor is it returned to the free pool. Rather, it remains in the Shared connection pool, ready for another request within the same LTC for a connection to the same resource."
Based on the above, is it true that if my web.xml resource-ref scope is set to Shareable, then when the application closes a connection it remains in the shared connection pool, and thus NumConnUsed stays so high?
If I interpret the links in my own special way (;)), the shareable vs. unshareable distinction is about different connections within the same page.
java.sql.Connection connectionOne = DriverManager.getConnection(...);
...
java.sql.Connection connectionTwo = DriverManager.getConnection(...);
These two, at a glance, seem to be separate connections, but if your application server is set to shareable connections, the second call returns a handle pointing to the first connection instead of a new connection. When the page finishes, the connection should be sent back to the pool.
The AS is probably keeping the pool filled with connections to enhance performance.
This is not fact, only my own interpretation of the links.
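For reference, the sharing scope is declared per resource reference in web.xml; a minimal deployment-descriptor sketch with placeholder names:

<resource-ref>
  <res-ref-name>jdbc/MyDataSource</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
  <!-- Shareable (the default) lets the container hand the same physical
       connection to multiple callers within one transaction context;
       Unshareable forces a dedicated connection per lookup. -->
  <res-sharing-scope>Unshareable</res-sharing-scope>
</resource-ref>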
