Good morning everyone.
I've been having a problem for some time. I have users in my Rails application, and suppose each user has a resource in their panel where they can register certain parameters via a form.
Now comes what I imagine may not be possible: after registering their parameters, could the user have access to a button in the web interface that starts a new Docker container, either on the same server (droplet) as the application or on another server (droplet)? Naturally, the image that gets launched would be started with the parameters the user registered in the resource.
It sounds confusing, but it's the only solution I have for giving users a kind of live monitor: inside this container, requests will be triggered under certain conditions and at certain times, depending on the parameters the users submitted.
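To make it concrete, the call sequence I have in mind against the Docker Engine API would look roughly like this. This is only a sketch in C# using the Docker.DotNet client (from Rails the same Engine API endpoints could be hit, e.g. via the docker-api gem); the image name, environment variable names, and socket path are placeholders:

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Docker.DotNet;
    using Docker.DotNet.Models;

    class MonitorLauncher
    {
        // Start one container per user, passing the user's registered
        // parameters as environment variables.
        public static async Task<string> LaunchAsync(string userId, string targetUrl, int intervalSeconds)
        {
            // Talks to the local Docker daemon; point the Uri at another
            // droplet's daemon to launch remotely.
            var client = new DockerClientConfiguration(
                new System.Uri("unix:///var/run/docker.sock")).CreateClient();

            var created = await client.Containers.CreateContainerAsync(new CreateContainerParameters
            {
                Image = "live-monitor:latest",            // placeholder image
                Env = new List<string>
                {
                    $"USER_ID={userId}",                  // the user's form parameters
                    $"TARGET_URL={targetUrl}",
                    $"INTERVAL_SECONDS={intervalSeconds}",
                },
            });

            await client.Containers.StartContainerAsync(created.ID, new ContainerStartParameters());
            return created.ID;
        }
    }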
I've built an app (with Flask, Flask-Login, and Dash) on GCP Cloud Run. The app allows users to log in, look at some fancy dashboards, and leave comments on certain pages. It works great performance-wise: instances spin up quickly with minimal lag for users, the BigQuery interface I built works well, and Pub/Sub messages sent from user interactions do exactly what they're supposed to do.
The only issue I'm having right now is that there's something weird about which container instance a user connects to. What will often happen is that a user logs in to my app via their browser successfully, and then, when navigating to another password-protected page, receives a 401 error (seemingly at random).
My belief is that this happens because the navigation request (clicking a link to another password-protected page) spins up another Cloud Run instance. Is there any way to force Cloud Run to keep routing a given user's requests to a specific instance of my container, so that if a user logs in and then navigates, GCP doesn't hand the next request to a freshly autoscaled instance?
I've experimented with setting the maximum number of concurrent requests per instance to 1 for the app's frontend container, but it doesn't seem to improve this behavior, which happens sporadically throughout a given user's session.
To clarify, the frontend part of the app is still usable, but it is an annoying user experience to constantly have to log in again.
Any help or guidance is appreciated!
The answer was as simple as turning on session affinity, per @DazWilkin's comment.
What I did:
1. Went to the Cloud Run dashboard on GCP and selected the service of interest.
2. Clicked "Edit and Deploy New Revision".
3. Went to the "Connections" tab.
4. Checked the box next to the "Session affinity" preview feature.
5. Clicked "Deploy".
This ended up completely solving the problem!
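For reference, the same switch should also be reachable from the CLI; I believe the equivalent is (SERVICE being your service name):

    gcloud run services update SERVICE --session-affinity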
Our system has two servers: one (S1) runs processing and data storage (basically the DB), and the other is a web server (WS).
There are two types of events that can happen in the system:
1. User A pings user B. In this case we check whether user B is logged in, and we push a notification to user B's client through SignalR. It works.
2. Services constantly run on S1 and generate new data that concerns multiple users. My goal is that as soon as new data important for user A is generated, I immediately dispatch a SignalR notification to user A's client, provided he/she is logged in.
It is not quite clear to me how to design part 2. My current thought is to start an indefinite process on the web server that monitors our database, checks whether new records have been generated for each user, and then pushes a SignalR message.
That would be fine, but if we have 10k users logged in, I don't think the right decision would be to run 10k threads monitoring activities.
Basically, my question is: what would be a proper way to design a SignalR-based notification mechanism driven by events that do not originate on our web server?
I would use a service bus or message queue, for example RabbitMQ (free): https://www.rabbitmq.com/
You can proxy the messages directly to the clients using this proxy library (I'm the author).
Docs: https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/wiki
Demo: https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/tree/master/SignalR.EventAggregatorProxy.Demo.MVC4
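If you prefer to see the shape of a hand-rolled relay rather than the proxy library, a minimal sketch looks like this (assuming classic ASP.NET SignalR 2 and the RabbitMQ.Client package; the queue name, hub, routing-key convention, and client-side notify method are made up for illustration):

    using System.Linq;
    using System.Text;
    using Microsoft.AspNet.SignalR;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    public class NotificationRelay
    {
        // Run once at startup (e.g. from Application_Start): consumes the
        // queue that S1's services publish to, and forwards each message
        // to the target user's connected SignalR clients.
        public static void Start()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            var connection = factory.CreateConnection();
            var channel = connection.CreateModel();

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                // Convention (invented): the routing key carries the user id.
                var userId = ea.RoutingKey;
                var payload = Encoding.UTF8.GetString(ea.Body.ToArray());

                GlobalHost.ConnectionManager
                    .GetHubContext<NotificationHub>()
                    .Clients.User(userId)         // maps via IUserIdProvider
                    .notify(payload);             // dynamic client-side method
            };
            channel.BasicConsume(queue: "user-events", autoAck: true, consumer: consumer);
        }
    }

    public class NotificationHub : Hub { }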
You can also set up a SQL dependency that triggers a message to your SignalR clients:
http://techbrij.com/database-change-notifications-asp-net-signalr-sqldependency
This link is the one I based my code on.
A couple of things to watch for. First, the setup of the query: you cannot use three-part table names (note the two-part [dbo].[Case] below):
"SELECT [CMRID],
[SolutionID],
[CreateDT],
[ModifyDT]
**FROM [dbo].[Case]**
WHERE [ModifyDT] > " + LastExecutionDateTime;
Also, and this is very important: you MUST re-register the event handler every time the dependency fires. If you don't, it will work the first time and then stop working.
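To make the re-registration point concrete, here is a minimal sketch of the pattern (assuming System.Data.SqlClient and classic ASP.NET SignalR 2; the connection string, columns, and hub are placeholders, and Service Broker must be enabled on the database):

    using System.Data.SqlClient;
    using Microsoft.AspNet.SignalR;

    public class CaseWatcher
    {
        private readonly string _connString = "<connection string>";

        public void Start()
        {
            SqlDependency.Start(_connString);   // once per app lifetime
            RegisterDependency();
        }

        private void RegisterDependency()
        {
            using (var conn = new SqlConnection(_connString))
            using (var cmd = new SqlCommand(
                "SELECT [CMRID], [SolutionID], [CreateDT], [ModifyDT] FROM [dbo].[Case]", conn))
            {
                var dependency = new SqlDependency(cmd);
                dependency.OnChange += OnChange;
                conn.Open();
                cmd.ExecuteReader().Dispose();  // the query must execute to arm the subscription
            }
        }

        private void OnChange(object sender, SqlNotificationEventArgs e)
        {
            ((SqlDependency)sender).OnChange -= OnChange;
            RegisterDependency();               // re-arm, or it fires only once

            GlobalHost.ConnectionManager
                .GetHubContext<CaseHub>()
                .Clients.All.refreshCases();    // dynamic client-side method
        }
    }

    public class CaseHub : Hub { }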
I hope this helps you.
I don't know if this is common, but I wanted to check. I am building a site on an IIS 7 server and coming across a weird problem: whenever I have two clients accessing the site, it seems they are sharing info. Here is an example: when one client does a search for a particular item, the other client goes to the search page and sees the results of the first client's search. I am using a global class in my code-behind to store this information.
So here is my question: my understanding of servers was that if two clients access the server, they are running on different instances of the site, meaning that even if I have a global class in my code, it would be as if two machines were running it. Am I wrong in this understanding?
Also are there settings in IIS that I need to change for this to work?
In ASP.NET you can use Session variables: per-user state stored in server memory and keyed to a serialized session token. You can store HTML form info in the session so another page on your site can read it. A global (static) class, by contrast, is shared by every request in the application, which is why your two clients are seeing each other's search results.
The syntax in your MVC controller action to create a Session would be:
Session["MyFormData"] = someObject;
http://msdn.microsoft.com/en-us/library/ms178581.aspx
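As a minimal illustration of the difference, here is a hypothetical MVC controller (the static field shows the bug, not something to copy):

    using System.Web.Mvc;

    public class SearchController : Controller
    {
        // WRONG for per-user data: a static is shared by ALL requests in
        // the application, which is why your clients see each other's results.
        private static string LastQuery;

        [HttpPost]
        public ActionResult Search(string query)
        {
            Session["LastQuery"] = query;   // per-user, held in server memory
            return RedirectToAction("Results");
        }

        public ActionResult Results()
        {
            // Reads back only this user's value.
            ViewBag.Query = (string)Session["LastQuery"];
            return View();
        }
    }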
Summary: is there a daemon that will do postbacks when a user connects/disconnects via TCP, or would it be a good idea to write one?
Details:
There are a number of questions based around this already, but I believe this is a different "twist" on it. We're writing a Ruby on Rails web application, and we would like to be able to tell whether a user is "online" or "offline", with the following definitions:
"online" - the user's browser is open and maintaining a TCP connection to one of our servers.
"offline" - the user's browser is no longer connected to one of our servers.
What we're thinking is a convenient way of doing this is to run a completely separate "online state" server that each of our users will connect to (exactly once):
when a connection is made to the "online state" server, it will postback to our actual RoR site and let it know "this user just logged on".
when a connection is lost from the "online state" server, it will postback to our actual RoR site and let it know "this user just logged off".
This methodology seems reasonable and keeps things quite modularized (the online state server, for instance, will be quite simple, which is nice). We're able to write this online state server, but have the following questions:
1. Any specific problems with the above architecture that we haven't taken into account?
2. Is there a daemon or application out there that does this already? Why reinvent the wheel if it has already been written?
3. Is there a push server out there that offers this functionality, i.e. one that maintains connections to the users but will postback or send notifications upstream to the web servers when a user connects or disconnects?
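For concreteness, the kind of presence server we have in mind would look roughly like this (a sketch in C#; the port, the postback URLs, and the first-line-is-user-id protocol are all placeholders we invented):

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Http;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    class PresenceServer
    {
        static readonly HttpClient Http = new HttpClient();

        static async Task Main()
        {
            var listener = new TcpListener(IPAddress.Any, 9000);
            listener.Start();
            while (true)
            {
                var client = await listener.AcceptTcpClientAsync();
                _ = HandleAsync(client);   // one task per connected user
            }
        }

        static async Task HandleAsync(TcpClient client)
        {
            using (client)
            using (var reader = new StreamReader(client.GetStream()))
            {
                // Assumed protocol: the browser-side connection sends its
                // user id as the first line after connecting.
                var userId = await reader.ReadLineAsync();
                await Http.PostAsync($"https://example.com/presence/{userId}/online", null);
                try
                {
                    // Block until the connection drops (browser/tab closed).
                    while (await reader.ReadLineAsync() != null) { }
                }
                finally
                {
                    await Http.PostAsync($"https://example.com/presence/{userId}/offline", null);
                }
            }
        }
    }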
Is this something you envisage users would install on their systems?
If you are looking for a browser-based system, WebSockets are probably your only option, using something like Socket.IO (http://socket.io/).
The Node.js socket server provided as part of this project can be found on GitHub: http://github.com/LearnBoost/Socket.IO-node
Node.js is a great platform designed for exactly this problem domain, and there are a number of WebSocket servers for Node.
Unless your app is entirely Ajax-based and uses a single parent page, you would need to create a persistent parent frame containing the socket that wraps your application; otherwise, each time the user clicks a link, the page unloads and reloads, resulting in a disconnection from and re-connection to the state server.
I am going to build a web application which will accept different events from external sources and present them quickly to the user for further actions. I want to use Ruby on Rails for the web application. This is an internal development project, and I would prefer simple, easy-to-use solutions for rapid development over highly reliable but complex systems.
What it should do
The user has the web application open in his browser. Now a phone call comes in. The phone call is registered by a PBX monitoring daemon, in this case via the Asterisk Manager Interface. The daemon sends the available information (remote extension, local extension, call direction, channel status, start time, end time) somehow to the web application. Next, the user is notified about the phone-call event. The user can now work with this, for example by entering a summary or by matching the call to a customer profile.
The duration from the first event on the PBX (e.g. the creation of a new channel) to the popup notification in the browser should be short; given a fast network, I would like it to be within two seconds. The individual pieces of information about an event are created asynchronously: the local extension may be supplied separately from the remote extension, and the user can enter a summary before the call has ended. The end time, new status, etc. will show up on the interface as soon as one party has hung up.
The PBX monitor is just one data source; there will be more monitors, such as email or requests via a web form. The monitoring daemons will not necessarily run on the same host as the database or web server. I do not imagine the application will serve thousands of logged-in users or concurrent requests soon, but by design, 200 users with about the same number of events per minute should not be a scalability issue.
How should I do it?
I am interested in how you would design such an application. What technologies would you suggest? How do the daemons communicate their information? When, and by whom, is the data about an event stored in the main database? How does the user get notified? Should the browser receive a complete dataset on behalf of a daemon, or just a short note that new data is available? Which JS library should I use, and how do I create the necessary code on the server side?
On my research I came across a lot of possibilities: Message brokers, queue services, some rails background task solutions, HTTP Push services, XMPP and so on. Some products I am going to look into: ActiveMQ, Starling and Workling, Juggernaut and Bosh.
Maybe I am aiming too high? If there is a simpler or easier way, like just using the XML or JSON interface of Rails, I would like to read about that even more.
I hope the text is not too long :)
Thanks.
If you want to skip Java and Flash, perhaps it makes sense to use a technology in the Comet family to do the push from the server to the browser?
http://en.wikipedia.org/wiki/Comet_%28programming%29
For the sake of simplicity, for notifications from daemons to the Web browser, I'd leave Rails in the middle, create a RESTful interface to that Rails application, and have all of the daemons report to it. Then in your daemons you can do something as simple as use curl or libcurl to post the notifications. The Rails app would then be responsible for collecting the incoming notifications from the various sources and reporting them to the browser, either via JavaScript using a Comet solution or via some kind of fatter client implemented using Flash or Java.
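As an illustration of that daemon-side postback (the endpoint URL and JSON fields are invented; the answer suggests curl/libcurl, but any HTTP client follows the same pattern, shown here in C# to match the other sketches):

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class PbxReporter
    {
        static readonly HttpClient Http = new HttpClient();

        // Called by the PBX monitoring daemon whenever the Asterisk Manager
        // Interface reports a call event.
        public static Task ReportCallAsync(string localExt, string remoteExt, string direction)
        {
            var json = "{\"local\":\"" + localExt + "\"," +
                       "\"remote\":\"" + remoteExt + "\"," +
                       "\"direction\":\"" + direction + "\"}";
            var body = new StringContent(json, Encoding.UTF8, "application/json");
            return Http.PostAsync("https://example.com/api/events", body);
        }
    }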
You could approach this a number of ways, but my only comment would be: push, don't pull. For low latency it's not only quicker but more efficient, as your server no longer has to handle n clients polling the DB/queue once a second. ActiveMQ is OK, but Starling will probably serve you better if you're not looking for insane levels of persistence.
You'll almost certainly end up using Flash on the client side (Juggernaut used it last time I checked) or Java. This may be an issue for your clients if they don't have Flash/Java installed, but for most people it's not; still, a fallback onto a pull-based notification system might be prudent to implement.
Perhaps http://goldfishserver.com might be of some use to you. It provides a simple API to allow push notifications to your web pages. In short, when your data updates, send it (some payload data) to the Goldfish servers and your client browsers will be notified, with the same data.
Disclaimer: I am a developer working on goldfish.
The problem
There is an event, either external or perhaps internal to your app.
Users should be notified.
One solution
I am facing this problem myself. I haven't solved it yet, but this is how I intend to do it; it may help you too:
(A) The app must learn about the event (via an exposed end point)
Expose an end point by which your app can be notified about external events.
When the end point is hit (and after authentication), users then need to be notified.
(B) Notification
You can notify the user directly by changing the DOM on the web page they are currently on.
You can notify users by using the Push API (but you need to make sure your target browsers support it).
All of these notification approaches can be handled via Action Cable: (i) by updating the DOM to notify you when a phone call comes in, or (ii) via a push notification that pops up in your browser.
Summary: use Action Cable.
(Also: why use an external service like Pusher when you have Action Cable at your disposal? Some people say scalability and infrastructure management, but I do not know enough to comment on those issues.)