There are ways to find out how many users/connections there are for Action Cable, as asked here previously in "How do I find out who is connected to ActionCable?" and "ActionCable - how to display number of connected users?".
I switched to AnyCable for better performance, and it is working in my staging environment, but I don't see any pubsub keys in Redis. Is there a way to look into the WebSocket server's details: active connections, memory consumption, etc.?
I'm looking at moving my app's deployment to Heroku, and I'd like to determine if it can correctly run there on the basic plan before putting in the effort to migrate. The basic plan limits Redis to 20 connections.
I don't fundamentally understand the Rails/Redis connection architecture. Is there a single connection to Action Cable, which then distributes the data, or is there a connection per actual client (i.e. one connection for every browser tab)?
As per the docs,
An individual user will create one consumer-connection pair per browser tab, window, or device they have open.
Action Cable lets you identify a connection using a connection identifier, typically a global object called current_user. With this approach, you can later retrieve all open connections for a given user (and potentially disconnect them all if the user is deleted, is unauthorized, or has too many connections open).
Also, note that Action Cable uses a worker pool to run connection callbacks and channel actions in isolation from your server's main thread.
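As a rough sketch of what the identification part looks like (this is the standard Action Cable API; the cookie-based user lookup is just one possible auth approach):

    # app/channels/application_cable/connection.rb
    module ApplicationCable
      class Connection < ActionCable::Connection::Base
        identified_by :current_user

        def connect
          self.current_user = find_verified_user
        end

        private

        # Assumes cookie-based auth; adapt to however you sign users in.
        def find_verified_user
          User.find_by(id: cookies.encrypted[:user_id]) ||
            reject_unauthorized_connection
        end
      end
    end

    # Later, e.g. when a user is deleted, drop all of their
    # open connections at once:
    ActionCable.server.remote_connections.where(current_user: user).disconnect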
We use MongoDB as the central database for our application, a consumer-facing mobile app. At present it's a 7-member replica set, with replica-set-1 being the primary at the moment. The backend that connects to the Mongo replica set is built in Ruby on Rails, and we use Mongoid as the ODM.
There are mainly 3 pieces connecting to the MongoDB replica set:
The consumer application
The Admin and customer care management application
The data retrieval application (for analytics and such purposes)
All these 3 apps connect to the same replica set as of now.
What I would like to know is whether it is possible to connect different applications to specific replica set members.
For example, the mobile app connects to the primary for writes and to replicas 2-4 for reads; the customer care management application connects to the primary (for writes) and to replicas 5-7 for reads.
I don't think explicitly mentioning specific replicas in the mongoid.yml configuration works. Even though I have mentioned only replica-set-7 in the Mongoid hosts list for the data retrieval application, I still see some of its queries in the log files of replica-set-2 and 3.
So obviously, MongoDB decides how to distribute the queries among its replicas, regardless of the configuration specified on the Mongoid client side.
I would really love to know if such a thing is possible at all using MongoDB and Mongoid, as it would help us solve a lot of our load issues. Right now, heavy queries from the customer care and data retrieval apps affect the consumer-facing mobile app as well, since the reads are not segregated. So basically I would like to separate out the reads.
Also, if this is possible at all, I would still be wary of potential pitfalls, especially since all 3 applications can write to the DB. For example, replica-3 could suddenly become the primary after an election while not being explicitly mentioned in the configuration of the data retrieval application. What would happen then is a concern.
I am not at all sure whether this is possible, but I just wanted to know if there is a way to figure it out. Any help would be really appreciated.
When you connect to any member of a replica set, the client is told the full state of the replica set and can connect to any of them. The initial set of hosts is just the seed for that process; as long as your application can reach one of those hosts, it doesn't matter which hosts are in that configuration.
Mongo does have the concept of tagged replica set members. When creating a connection or executing a query you can specify the tags to use to select the replica set member to read from.
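For illustration, here is a hedged sketch with the Ruby driver, assuming you have first tagged member 7 (e.g. with { "use" => "reporting" }) via rs.reconfig(); the host name, set name, and tag values are made up:

    require 'mongo'

    # Seed with any reachable member; the driver discovers the rest of
    # the set. Reads are routed only to members carrying the "reporting"
    # tag, so the analytics load stays off the other secondaries.
    client = Mongo::Client.new(
      ['db1.example.com:27017'],
      replica_set: 'myReplSet',
      read: { mode: :secondary_preferred, tag_sets: [{ 'use' => 'reporting' }] }
    )

If I recall correctly, the same read preference can also be set per application under the client options in mongoid.yml, so each of your three apps could carry its own tag_sets without code changes.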
I have a Raspberry Pi that is tightly coupled with a device that I want to control.
The desired setup I want to have would look something like this:
The physical device with interactive hardware controls on the device (speaker, mic, buttons)
A Raspberry Pi coupled to the device
On the Pi:
A daemon app that reacts to changes from the hardware
A web interface that shows the current state of the device and allows configuring it
The system should somehow be able to update itself with new software when it becomes available (apt-get or some other mechanism).
For the web interface I am going to use a Rails app, which is not a problem as such. What is not clear to me is the event-driven software that talks to the hardware through GPIO. Firstly, I would prefer to do this in Ruby, so that I don't have a big technology gap when developing the solution.
How can I ensure that both apps start up and run in the background when the Raspberry Pi starts?
How do I notify the web app of an event (e.g. a button was pressed)?
I wonder if it makes sense that the two pieces of software have a shared database to communicate.
How do I best set up some auto-update mechanism for both pieces of software without requiring the user to take any action?
Apps
This will depend on the operating system. If you install a lightweight version of Linux, you should be able to register both applications to start at boot. I've never done anything like this myself, but I know that on Windows you can create startup programs; likewise, you should be able to do something similar in Linux.
BTW, you wouldn't "run" the Rails app; you'd fire up its server to capture any requests. You'd basically run your app locally in "production" mode, allowing you to send requests either through localhost or through a pseudo domain set up in the HOSTS file of your box.
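As a sketch of one mechanism I believe would work on Raspbian-style Linux, cron's @reboot hook can start both apps (all paths and the port here are assumptions):

    # crontab -e (as the pi user); each line runs once at boot
    @reboot /usr/bin/ruby /home/pi/hw_daemon/daemon.rb >> /home/pi/hw_daemon.log 2>&1
    @reboot cd /home/pi/webapp && bundle exec rails server -e production -p 3000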
--
Web App
The web app itself is RESTful, meaning (I believe) it will only act upon requests sent to it. Because this works over the HTTP protocol, it essentially means you'll need some sort of (web) service to send requests to the web app:
Representational state transfer (REST) is a way to create, read, update or delete information on a server using simple HTTP calls
Although I've never done this myself, I would use the Ruby app on your Pi to send HTTP requests to your Rails app. This will certainly add a level of complexity, but it will give you an interface between the two types of data transfer.
The difference is that Rails, or any other web app, will only act on request. "Native" applications run as long as the operating system is running, meaning you can "listen" for updates from the hardware etc.
What I would do is split the functionality:
Hardware input > send to service
Service > sends to Rails
Rails > sends response to service
Service > processes response
This may seem inefficient, but I think it's the best way to capture local input from your hardware. You'll have to run the Rails app on localhost, behind something like nginx or some other efficient server.
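A minimal Ruby sketch of that service loop, assuming an /events route on the Rails side (the route, the GPIO helper, and the polling interval are all placeholders):

    require 'net/http'
    require 'json'

    RAILS_ENDPOINT = URI('http://localhost:3000/events')  # assumed route

    # Placeholder: replace with a real GPIO read (e.g. via a GPIO gem or
    # by reading /sys/class/gpio); always returns false in this sketch.
    def button_pressed?
      false
    end

    loop do
      if button_pressed?
        Net::HTTP.post(RAILS_ENDPOINT,
                       { event: 'button', pressed_at: Time.now.to_i }.to_json,
                       'Content-Type' => 'application/json')
      end
      sleep 0.05  # crude polling interval
    end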
--
Database
It would only make sense if they shared the data. You should remember that a database is different from a data table: a database stores many tables and is generally meant for a single purpose, whilst a data table stores a single type of data.
From what you've written, I would recommend using two databases running on the same DB server. This will give you the ability to create as many tables as you want for each database, giving you scope to add as many different pieces of data as you wish. Sharing data can be done using an API or a web service.
--
Updating
The Rails app will not need to be "updated"; you'll just need to deploy a fresh version. The beauty of Internet-centric software :)
In terms of your Raspberry Pi "on-board" software update, I don't have much experience with this, so I can only recommend looking into it further.
For my portfolio software I have been using fetchmail to read from a Google email account over IMAP, and life has been great. Thanks to the miracle of the IMAP IDLE extension, my triggers fire in near-realtime via server push, much sooner than periodic polling would otherwise allow.
In my basic .fetchmailrc setup, in which a brokerage customer's account emails trade notifications to a dedicated Gmail/Google Apps box, I've had
poll imap.gmail.com proto imap user "youraddress@yourdomain-OR-gmail.com" pass "yoMama" keep nofetchall ssl idle mimedecode limit 29000 no rewrite mda "myCustomSpecialMDAhandler.sh %F %T"
Trouble is, now I need to support reading from multiple email boxes and hand the emails off to other specialized MDA scripts I wrote. No problem, just add more poll lines to .fetchmailrc, right? Well, that doesn't work when the other accounts also use imap.gmail.com. What ends up happening is that while one account reads fine (not necessarily the first one listed, though usually yes), the other gets "socket error" all day and its emails remain untouched and unread. I can't figure out why, and I'm not even sure whether it's some mechanism at imap.gmail.com, e.g. limiting to one IMAP connection from a host. That doesn't seem right, since I have kept IMAP connections to many separate Gmail & Google Apps accounts open from the same client for years (with Thunderbird, for example) and never noticed this exclusivity problem.
I haven't tried launching multiple fetchmail daemons using separate -f config files (assuming they wouldn't conflict), or deploying one or more instances of getmail or other similar email fetchers in addition. I'm still trying to avoid that kind of mess, which becomes unscalable the more boxes I have to monitor.
I don't have the reference offhand, but somewhere in fetchmail's docs I recall reading that idle is not so much an IMAP feature as an optional fetchmail trick, one which has the (nasty for me) side effect of choking off all other defined accounts from polling until the connection is cut off by some external event or timeout. So at least that would vindicate Google.
Credit to Carl's Whine Rack blog for some tips.
For now I run killall fetchmail; fetchmail -f fetcher.$[$RANDOM % $numaccounts].rc periodically from crontab to cycle through the accounts, each defined individually in fetcher.1.rc, fetcher.2.rc, etc. This is acceptable while email events are relatively infrequent.
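Each fetcher.N.rc is just a single-account copy of the config shown earlier, so only one IDLE connection runs at a time; for example (the address, password, and MDA script name are placeholders):

    # fetcher.1.rc -- one Gmail account per file
    poll imap.gmail.com proto imap user "account1@yourdomain.com" pass "secret1" keep nofetchall ssl idle mimedecode limit 29000 no rewrite mda "mdaHandler1.sh %F %T"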
I want to know which architecture is best to adopt for this case:
I have many shops that connect to a web application developed using Ruby on Rails.
The internet is not reachable all the time.
The solution was to develop an offline system, which requires installing a local copy of the distant database.
All this was already developed.
Now what I want to do:
Work always on the local copy of the database.
Any change to the local database should be synchronized with the distant database.
All the local copies should have the same data as the other local copies.
To solve this problem I thought about using JMS-like software, possibly RabbitMQ.
This consists of pushing each SQL request onto a JMS queue, to be executed on the distant instance of the application, which will insert into the distant DB and push the insert or SQL statement onto another queue that is read by all the local instances. This seems complicated and would probably slow down the application.
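For concreteness, the kind of publishing step I have in mind would look something like this in Ruby with the bunny gem (the broker URL, queue name, and SQL statement are placeholders):

    require 'bunny'

    # Push one mutating statement to the central queue; a consumer on the
    # distant instance would pop it and execute it against the distant DB.
    conn = Bunny.new('amqp://guest:guest@central.example.com:5672')
    conn.start
    channel = conn.create_channel
    queue = channel.queue('sql_changes', durable: true)
    channel.default_exchange.publish(
      "INSERT INTO orders (shop_id, total) VALUES (42, 19.99);",
      routing_key: queue.name, persistent: true
    )
    conn.close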
Is there a design or recommendation that I should apply to solve this kind of problem?
You can do that, but essentially you are developing your own replication engine. Those things can be tricky to get right (what happens if m1 and m3 are executed on replica r1, but m2 isn't?). I wouldn't want to develop something like that unless you are sure you have the resources to make it work.
I would look into an existing off-the-shelf replication solution. If you are already using a SQL DB, it probably has some support for this. Look here for more details if you are using MySQL.
Alternatively, if you are willing to explore other backends, I have heard that CouchDB has great support for replication. I have also heard of people using git libraries to do that sort of thing.
Update: After your comment, I realize you already use MySQL replication and are looking for a solution for re-syncing the databases after being offline.
Even in that case RabbitMQ doesn't help you at all, since it requires a constant connection to work, so you are back to square one. The easiest solution would be to write all the changes (SQL commands) into a text file at the remote location, then, when you get the connection back, copy that file (scp, ftp, email or whatever) to the master server, run all the commands there, and then resync all the replicas.
Depending on your specific project, you may also need to make sure there are no conflicts when running commands from different remote locations, but there is no general technical solution to this. Again, depending on the project, you may want to cancel one of the transactions, notify the users that it happened, and so on.
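A crude Ruby sketch of that journal-and-replay idea (the file paths, host names, and the scp/ssh replay step are all assumptions):

    # While offline: append every mutating statement to a local journal.
    JOURNAL = '/var/local/pending_changes.sql'

    def record_change(sql)
      File.open(JOURNAL, 'a') { |f| f.puts(sql.strip.chomp(';') + ';') }
    end

    # Once connectivity returns: ship the journal to the master, replay
    # it there, and clear it only after a successful run.
    def replay_on_master
      system('scp', JOURNAL, 'deploy@master.example.com:/tmp/pending.sql') &&
        system('ssh', 'deploy@master.example.com', 'mysql shopdb < /tmp/pending.sql') &&
        File.truncate(JOURNAL, 0)
    end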
I would recommend taking a look at CouchDB. It's a non-SQL database that does exactly what you are describing automatically. It's used especially in phone applications that often don't have internet or data connectivity. The idea is that you have a local copy of a CouchDB database and one or more remote CouchDB databases. The CouchDB server then takes care of the replication between the distributed systems, and you always work off your local database. This approach is nice because you don't have to build your own distributed replication engine. For more details, take a look at the 'Distributed Updates and Replication' section of their documentation.
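To give a flavor of how little glue this needs, kicking off continuous replication between a local and a remote CouchDB is a single POST to its _replicate endpoint (the URLs and database names below are made up):

    require 'net/http'
    require 'json'

    # Continuously push local changes to the central server; CouchDB
    # retries on its own and resumes after the connection drops.
    uri = URI('http://localhost:5984/_replicate')
    body = {
      source:     'shop_db',
      target:     'http://central.example.com:5984/shop_db',
      continuous: true
    }.to_json
    Net::HTTP.post(uri, body, 'Content-Type' => 'application/json')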