Do constants stay the same for ALL users? - ruby-on-rails

I have a web app that I built. It communicates with the Salesforce API. I have users and administrators. All connections to the API use the same credentials.
I am concerned that my API connection is going to be created multiple times because each admin that is logged in has their own instance of the connection.
If I hold the API connection in a constant, do all other sessions/users have access to that exact connection, or do I have to connect for each user? How can I share one single API connection across ALL users?

A stateless API will never give you a persistent connection, so there's no use in holding one in a constant. Each HTTP request is logically independent, and the underlying TCP connection is not guaranteed to persist between requests (keep-alive is an optimization, not a contract).
It's only things like database or WebSocket connections that persist, and if you need to manage those you need a connection pool, not a simple constant. If the connection ever fails it needs to be replaced, and if more than one thread can use it you have to handle acquisition and locking properly.
Create your API connectors as necessary. Unless you have a measurable performance problem, don't worry about it. If you do end up needing to share expensive connections across threads, see the pooled sketch below.
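A minimal sketch of such a pool, using the connection_pool gem; SalesforceClient and the ENV variable names are hypothetical stand-ins for whatever client library you actually use:

require 'connection_pool'

# Five clients shared across all threads; a thread that can't check one
# out within 5 seconds raises ConnectionPool::TimeoutError.
SALESFORCE_POOL = ConnectionPool.new(size: 5, timeout: 5) do
  # Hypothetical client class -- substitute your real Salesforce library.
  SalesforceClient.new(
    username: ENV['SF_USER'],
    password: ENV['SF_PASS']
  )
end

# Each request checks a client out and returns it automatically:
SALESFORCE_POOL.with do |client|
  client.query('SELECT Id FROM Account')
end

Holding the pool itself in a constant is fine here, because the pool handles acquisition and locking; the constant never hands two threads the same raw connection at the same time.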

A Ruby constant is like a variable, except that its value is supposed to remain constant for the duration of the program. The Ruby interpreter does not actually enforce the constancy of constants, but it does issue a warning if a program changes the value of a constant.
Reference: http://rubylearning.com/satishtalim/ruby_constants.html
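For example, reassigning a constant only produces a warning; the program keeps running with the new value:

API_ENDPOINT = "https://example.com"
API_ENDPOINT = "https://other.example.com"
# warning: already initialized constant API_ENDPOINT

This is why a constant alone gives you no thread-safety or lifecycle guarantees for something like a live connection.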

Related

How to use TIdHTTPSession in TIdHTTPServer?

I don't understand the idea of TIdHTTPSession in TIdHTTPServer. What is it for? Is it a kind of container for a request, or what is it? How do I use it properly after I have enabled AutoSessionStart? And what will happen if I do not enable AutoSessions?
For example, say we have some shared resource FMyMessages: TStringList; Then how should I access this shared resource with/without sessions?
TIdHTTPSession has an FLock: TIdCriticalSection; member - so maybe I should use it to lock my shared resource FMyMessages from other threads if I have AutoSessions, otherwise I should use my own critical section?
Also, how can I count Sessions in the moment? I tried like this but it doesn't work:
Server.Contexts.Count.ToString;
I don't understand the idea of TIdHTTPSession in TIdHTTPServer? What is it for? Is it a kind of container for a request, or what is it? How do I use it properly after I have enabled AutoSessionStart?
HTTP is a stateless protocol. It does not remember information from one request to the next. And it does not even guarantee or require that the TCP connection itself remain open between requests.
That is where sessions come into play. The server can create a session object to store information, such as during a client login, and that session's unique ID is sent to the client via an HTTP cookie, which the client sends back to the server on subsequent requests to reuse the same session object. Eventually, the session will time out and be destroyed if you do not end it explicitly, such as during client logout.
And what will happen if I do not enable AutoSessions?
The server will simply not automatically create a new session object if one does not exist yet for each request. You would have to create a new session manually on an as-needed basis instead.
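For example, a hedged sketch of creating a session on demand inside an OnCommandGet handler (Indy 10 assumed; the form and component names are illustrative):

procedure TMyForm.IdHTTPServer1CommandGet(AContext: TIdContext;
  ARequestInfo: TIdHTTPRequestInfo; AResponseInfo: TIdHTTPResponseInfo);
begin
  // Only create a session if the request does not already carry one.
  if ARequestInfo.Session = nil then
    IdHTTPServer1.CreateSession(AContext, AResponseInfo, ARequestInfo);
  // ... handle the request, using ARequestInfo.Session as needed ...
end;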
For example, say we have some shared resource FMyMessages: TStringList; Then how should I access this shared resource with/without sessions?
Sessions have nothing to do with accessing shared resources, and everything to do with persisting per-client state data, such as user logins, database connections, etc.
TIdHTTPSession has an FLock: TIdCriticalSection; member - so maybe I should use it to lock my shared resource FMyMessages from other threads if I have AutoSessions, otherwise I should use my own critical section?
No. You can use Indy's TIdThreadSafeStringList instead.
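A hedged sketch of what that looks like (Indy 10 assumed; TIdThreadSafeStringList lives in the IdThreadSafe unit, and Lock() hands back the inner TStringList while blocking other threads until Unlock() is called):

uses Classes, IdThreadSafe;

var
  FMyMessages: TIdThreadSafeStringList;

procedure AddMessage(const AMsg: string);
var
  List: TStringList;
begin
  List := FMyMessages.Lock; // blocks other threads
  try
    List.Add(AMsg);
  finally
    FMyMessages.Unlock; // always release, even on exceptions
  end;
end;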
Also, how can I count Sessions in the moment? I tried like this but it doesn't work:
Server.Contexts.Count.ToString;
The Contexts property stores the active client TCP connections. That has nothing to do with HTTP sessions. Those are stored in the server's SessionList property instead.
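As a sketch only -- this assumes the default TIdHTTPDefaultSessionList implementation and that its SessionList property exposes the underlying thread-list, which you should verify against your Indy version:

var
  List: TList;
  SessionCount: Integer;
begin
  List := TIdHTTPDefaultSessionList(Server.SessionList).SessionList.LockList;
  try
    SessionCount := List.Count;
  finally
    TIdHTTPDefaultSessionList(Server.SessionList).SessionList.UnlockList;
  end;
end;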

SignalR connection (hub proxy) lifetime

What is best practice for connecting clients to SignalR hub? In client, is it better to keep connection (hub proxy) somewhere, or is it better to create connection (hub proxy) for each hub method call?
Per https://www.asp.net/signalr/overview/guide-to-the-api/hubs-api-guide-server#multiplehubs
There is no performance difference for multiple Hubs compared to defining all Hub functionality in a single class.
Whether or not you use multiple hubs is simply a matter of deciding how you want to logically organize your code. Standard OOP practices apply here.
Later in the same documentation...
If you need to use the context multiple times in a long-lived object, get the reference once and save it rather than getting it again each time. Getting the context once ensures that SignalR sends messages to clients in the same sequence in which your Hub methods make client method invocations. For a tutorial that shows how to use the SignalR context for a Hub, see Server Broadcast with ASP.NET SignalR.
...not sure if that last bit is relevant to what you're asking, but it's good to know as you plan your SignalR architecture.
The optimal way to go is to keep just a single connection for all method calls. Every additional connection you open wastes network resources and processing, as SignalR must keep a live connection with the server for each one. That means battery drain on mobile devices and more server workload.
[UPDATE]
After reading alex-dresko's answer I realized mine needs some clarification.
It doesn't matter how many proxies you create under the same connection, it won't change performance:
hubConnection = new HubConnection(BASE_ADDRESS);
var chatProxy = hubConnection.CreateHubProxy("chatHub");
var otherProxy = hubConnection.CreateHubProxy("otherHub");
var nProxy = hubConnection.CreateHubProxy("nHub");
However, you are asking if
is it better to keep connection (hub proxy) somewhere
Well, the connection (HubConnection) is one thing and the proxy is another.
New connections will open a new bridge between the client and the server, so creating and persisting a single connection globally in your app makes sense. Then you can reuse the very same connection to create as many proxies as you want.
You can easily test this scenario. Create a console app that creates one connection and 2 hub proxies. Then create 2 connections with 1 proxy each and compare the SignalR logs...
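A minimal sketch of that test (ASP.NET SignalR 2.x client package assumed; BASE_ADDRESS, the hub names, and the invoked method names are illustrative and must exist on your server):

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR.Client;

class Program
{
    const string BASE_ADDRESS = "http://localhost:8080/signalr";

    static async Task Main()
    {
        // One physical connection...
        var hubConnection = new HubConnection(BASE_ADDRESS);

        // ...shared by any number of proxies. Proxies must be created
        // before Start() is called.
        var chatProxy = hubConnection.CreateHubProxy("chatHub");
        var otherProxy = hubConnection.CreateHubProxy("otherHub");

        await hubConnection.Start();

        // Both proxies ride the same transport; the SignalR logs should
        // show a single negotiate/connect, not two.
        await chatProxy.Invoke("Send", "hello from chat");
        await otherProxy.Invoke("Ping");
    }
}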

Thread-safe way of changing the connection search_paths

I want to be able to switch between different DB schemas in a Rails 4 app.
The plan is to add a new middleware in the very beginning of the stack that will do that for me.
The only way to do it is by setting ActiveRecord::Base.connection.schema_search_path = '"$user",my_schema'.
The problem I have with this is that this connection will go to the pool and all the following requests will use the schema that was set in the first one (basically leaking it through).
So the solution I see is to always reset the search path to what it was before and always set it on each request.
But I don't want to do this because:
99% of the requests will go to the default (public) schema, so executing SET search_path TO "$user",my_schema would be an additional query that could have been avoided
higher risk of leaking (other middleware may establish the connection earlier, or changes to Rails or gems outside of my control may interfere)
All that especially applies to threaded servers, like Puma.
So are there any better alternatives to my solution with a middleware?
Thanks.
When you return connections to the pool, you must ensure the pool runs DISCARD ALL; to reset the connection state.
That will clear any SET ROLE, SET SESSION AUTHORIZATION, session variables, search_path setting, etc.
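A hedged sketch of how that can look as Rack middleware in Rails 4 (SchemaSwitcher and my_schema are illustrative names; note that DISCARD ALL also drops prepared statements and temp tables, which can interfere with ActiveRecord's prepared-statement cache):

class SchemaSwitcher
  def initialize(app)
    @app = app
  end

  def call(env)
    conn = ActiveRecord::Base.connection
    conn.schema_search_path = '"$user",my_schema'
    @app.call(env)
  ensure
    # Reset every piece of session state before the connection returns
    # to the pool, so the modified search_path cannot leak.
    conn.execute('DISCARD ALL') if conn
  end
end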

How to prevent ADODB.Connection pooling?

I'm using PowerShell v2.0; the question is in the title. I'm having to use the old-school ADODB.Connection (not the OLEDB provider) to open a Jet DB file (.mdb). The reason is simple: ADODB.Connection exposes properties I need access to that the OLEDB provider doesn't.
I'm opening the DB via ADODB.Connection to query for some information, and then I'm trying to compact the DB using JRO.JetEngine. The issue is that I keep getting an error about the Jet DB being locked.
I'm explicitly calling Close on it and setting the variable to $null, and still experiencing the issue. My best guess is that ADODB.Connection is using connection pooling, and so is not releasing the resources the way it should.
According to http://support.microsoft.com/kb/191572, the call to close() should be enough, but it doesn't seem to be working.
Is there a way for me to explicitly specify no connection pooling when creating ADODB.Connection objects?
In the link you provided, it is said that calling Close returns the connection to the pool:
What statement returns the connection to the pool?
Conn.Close
You might need to destroy/dispose the ADODB.Connection object, so that it is removed from the pool, or, if you are using OLE DB as the provider, configure the OLEDB Services, as explained here:
Enabling OLE DB Resource Pooling
Resource pooling can be enabled in several ways:
For an ADO-based consumer, by keeping one open instance of a Connection object for each unique user and using the OLEDB_SERVICES connection string attribute to enable or disable pooling. By default, ADO attempts to use pooling, but if you do not keep at least one instance of a Connection object open for each user, there will not be a persistent pool available to your application. (However, Microsoft Transaction Server keeps pools persistent as long as the connections in them have not been released and have not eventually timed out.)
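Two hedged options in PowerShell terms (the file path and variable names are illustrative). The first disables pooling up front via the documented "OLE DB Services=-4" connection-string attribute; the second explicitly releases the COM object so the runtime-callable wrapper cannot keep the connection alive:

# Option 1: disable pooling (and auto-enlistment) in the connection string.
$conn = New-Object -ComObject ADODB.Connection
$conn.Open("Provider=Microsoft.Jet.OLEDB.4.0;OLE DB Services=-4;Data Source=C:\data\mydb.mdb")
# ... run your queries ...
$conn.Close()

# Option 2: release the underlying COM object after Close.
[void][System.Runtime.InteropServices.Marshal]::ReleaseComObject($conn)
$conn = $null
[GC]::Collect()
[GC]::WaitForPendingFinalizers()

Either way, you can confirm the file lock is actually gone before compacting with JRO.JetEngine by checking that the .ldb lock file next to the .mdb has disappeared.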

How can I update a DataSnap server while clients are still connected?

We use stateful DataSnap servers for some business logic tasks and also to provide clientdataset data.
If we have to update the server to modify a business rule, we copy the new version into a new empty folder and register it (depending on the Delphi version, just by launching or by running the TRegSvr utility).
We can do this even while the old server instance is running. However, after registering the new version, all new client connections will still use the currently running (old) server instance. All clients have to disconnect first, then the new server will be used for the next clients.
Is there a way to direct all new client connections to the new server, immediately after registering?
(I know that new or changed method signatures will also require a change and restart of the clients but this question is about internal modifications which do not affect the interface)
We are using Socket connections, and all clients share the same server application (only one application window is open). In the early days we used a different configuration of the remote datamodule, which resulted in one app window per client. Maybe this could be a solution? (because every new client will launch the currently registered executable)
Update: does Delphi XE offer some support for 'hot deployment' (of updated servers)? We use Delphi 2009 at the moment but would upgrade to XE if it offers easier implementation of 'hot deployment'.
You could split your app server in two: a simple proxy object that redirects all methods (and optionally holds any state info) to a second server that actually implements your business logic. You would also need to implement a "silent reconnect" feature in the proxy server so that connected clients are not disturbed whenever you decide to replace the business app server. I have never built such a design myself, but I hope the idea is clear.
Have you tried renaming the current server and placing the new one in the same location with the correct name (versus changing the registry location)? I have done this for COM libraries before with success. I am not sure if it would apply to remote launch rules though, as it may look for an existing instance to attach to instead of launching a completely fresh server.
It may be a bit hackish, but you could have the client call a method on the server indicating that a newer version is available. This would allow it to perform any necessary cleanup so it doesn't end up talking to both the existing server instance and the new server instance at the same time.
There is probably not a simple answer to this question, and I suspect that you will have to modify the client. The simplest solution I can think of is to have a flag (a property or an out parameter on some commonly called method) on the server that the client checks periodically and that tells it to disconnect and reconnect (call it something like ImBeingRetired); a hypothetical sketch follows.
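A sketch of that flag check on a classic DataSnap client (ImBeingRetired is an invented method name, and SocketConnection1 is a TSocketConnection; verify the call against your own server interface):

// Poll the server flag; reconnect when the server reports retirement.
if SocketConnection1.AppServer.ImBeingRetired then
begin
  SocketConnection1.Connected := False; // detach from the old instance
  SocketConnection1.Connected := True;  // next connect reaches the newly registered server
end;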
It's also possible to write callbacks under certain circumstances for datasnap (although I've never done this). This would allow the server to inform the client that it should restart or reconnect.
The last option I can think of (that hasn't already been mentioned) would be to make the client/server stateless, so that every time the client wants something it connects, gets what it wants then disconnects.
Unfortunately none of these options are the answer you want to your question, but might give you some ideas.
1. (Optional) Set up VMware vSphere, ESX, or find a hosting service that already has one.
2. Store the session variables in a db.
3. Prepare 2 web boxes with 2 distinct IP addresses and deploy your stuff.
4. Set up DNS, a firewall, a load balancer, or a BSD VM so the name "example.com" resolves to web box 1.
5. Deploy the new version to web box 2.
6. Switch over to web box 2 using whatever routing method you chose.
7. Deploy the new version to web box 1 if things look ok.
Using DNS is probably easiest, but it takes time for the mapping to propagate to the client (if the client is outside your LAN), and two clients may see different results. Some firewalls have an IP address mapping feature that can map a public IP address to an internal one. The ideal way is to use a load balancer configured at 50:50 and change it to 100:0 when you want to upgrade, but that costs money. A cheaper alternative is to run a software load balancer on a BSD VM, but that probably requires some work.
Edit: What I meant to say is session variables, not sessions. You said the server is stateful. If it contains business logic that uses session variables, they need to be stored externally to be preserved across the reconnection during switchover. The actual DataSnap session will be lost, so when you shut down web box 1 during the upgrade, the client will get a "Session {some-uuid} is not found" error from web box 1 and will reconnect to web box 2.
You could also use 3 IP addresses (1 public and 2 private) so the client always sees one address, which is a better method.
I have done something similar by having a specific table which held my "data version". Each time I would update the server or change a system wide global setting, I would increment this field. When a client starts it always checks this value, and will check again before any transactions/queries. If the value was ever different from when I first started, then I needed to go through my re-initialization logic, which could easily include a re-login to an updated server.
I was using IIS to publish my app servers, so the data that would change would be the path to the app server. I kept the old ones available, to respond to any existing transactions that were in play. Eventually these would be removed once I knew there were no more client connections to that version.
You could easily handle knowing which versions to keep around if you log which server each client last connected to (and therefore would know about).
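A hypothetical sketch of that version check (the table, field, and helper names are invented for illustration):

// Read the server-side version stamp; compare it with the value
// captured at startup and again before each transaction.
function TClientData.ServerDataVersion: Integer;
begin
  VersionQuery.SQL.Text := 'SELECT data_version FROM app_metadata';
  VersionQuery.Open;
  try
    Result := VersionQuery.Fields[0].AsInteger;
  finally
    VersionQuery.Close;
  end;
end;

if ServerDataVersion <> FVersionAtStartup then
  ReinitializeAndRelogin; // invented helper: re-login to the updated server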
For newer versions (Delphi 2010 and up), there is an interesting solution
for systems using the HTTP transport:
Implementing Failover and Load Balancing in DataSnap 2010 by Andreano Lanusse
and a related question for the TCP/IP transport:
How to direct DataSnap client connections to various DS Servers?
