I have a service that publishes a Kafka event whenever a user attribute is updated. To consume and process this event I have a gem that uses Karafka; every application that boots the gem will be able to process that event (if possible). Does Karafka work with applications other than Rails? In my case, the service that publishes the event is written in Sinatra, and the consumer lives in the gem, which uses Karafka.
I'm the author of Karafka.
Karafka is a standalone framework and works with and without Rails:
Docs: https://karafka.io/docs/Integrating-with-Ruby-on-Rails-and-other-frameworks
Example apps (including a PORO app): https://github.com/karafka/example-apps
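For reference, a standalone (non-Rails) consumer process needs only a `karafka.rb` boot file. The topic, client id, and class names below are illustrative assumptions, not taken from your gem:

```ruby
# karafka.rb — boot file for a standalone Karafka process (no Rails).
# Topic and class names are assumptions for illustration.
require "karafka"

class UserEventsConsumer < Karafka::BaseConsumer
  def consume
    messages.each do |message|
      # Handle the user-attribute-updated event payload here.
      puts message.payload
    end
  end
end

class KarafkaApp < Karafka::App
  setup do |config|
    config.kafka = { "bootstrap.servers": "127.0.0.1:9092" }
    config.client_id = "user_events_processor"
  end

  routes.draw do
    topic :user_attribute_updated do
      consumer UserEventsConsumer
    end
  end
end
```

Start it with `bundle exec karafka server`; the Sinatra service only needs a producer (e.g. WaterDrop) publishing to the same topic.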
Related
I have a Rails 7 app set up with Hotwire and Action Cable to broadcast database commits and update the user interface in real time.
I have a separate app (written in Python) that I want to use to send updates to the rails app.
If I write directly to the database then I won't be able to trigger the user interface to update automatically.
I've looked into maybe using an RPC call using RabbitMQ but I'm not currently using it in my environment so it may be too much overhead.
I'm wondering - how can I do this from outside of the rails app?
Internal API endpoint
RPC call
Thanks
I would use an internal API/webhook endpoint inside your Rails app that accepts incoming requests and then does whatever you need with the data. From there you could add token-based authentication to make sure only the callers you want can access the API.
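As a sketch of that authentication check (framework-free, with hypothetical names; a real app would read the secret from credentials or ENV), the Python app could sign each request body with a shared secret and the Rails endpoint could verify the signature:

```ruby
require "openssl"

# Hypothetical helper: verify that an incoming webhook request was
# signed with a shared secret, so only trusted callers can hit the
# internal API endpoint.
module WebhookAuth
  module_function

  # Signature the sender attaches (e.g. in an X-Signature header):
  # hex HMAC-SHA256 of the raw request body.
  def sign(secret, body)
    OpenSSL::HMAC.hexdigest("SHA256", secret, body)
  end

  # Constant-time comparison to avoid leaking timing information.
  def valid?(secret, body, given_signature)
    expected = sign(secret, body)
    given = given_signature.to_s
    return false unless given.bytesize == expected.bytesize
    expected.bytes.zip(given.bytes).map { |a, b| a ^ b }.sum.zero?
  end
end
```

In a Rails controller this might look like `head :unauthorized unless WebhookAuth.valid?(secret, request.raw_post, request.headers["X-Signature"])`.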
While implementing WebSockets in my app I've become confused about which gem is better. I found plenty of different options; however, some are hard to tell apart.
Finally, I narrowed it down to Action Cable (a native part of Rails 5) and Faye (which appeared earlier and became very popular).
But now I'm stuck: is Action Cable the same thing as Faye? What are the differences (if there are any)?
From the Faye website: Faye is a publish-subscribe messaging system based on the Bayeux protocol. It provides message servers for Node.js and Ruby, and clients for use on the server and in all major web browsers.
From the ActionCable readme: Action Cable seamlessly integrates WebSockets with the rest of your Rails application. It allows for real-time features to be written in Ruby in the same style and form as the rest of your Rails application, while still being performant and scalable. It's a full-stack offering that provides both a client-side JavaScript framework and a server-side Ruby framework. You have access to your full domain model written with Active Record or your ORM of choice.
The short answer is yes: both are pub/sub messaging systems.
The long answer is no, because Faye is a lower-level tool and Action Cable uses Faye (look here), at least some of its components.
But you can always get the same results, with (maybe) different amounts of effort, building an application with either Faye or Action Cable. The big difference is that Faye works as a Rack-based component rather than a Rails-based one.
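To make the Rack-based point concrete, a minimal Faye server is just a Rack app in `config.ru` (a sketch assuming the faye gem; the mount path and timeout values are arbitrary):

```ruby
# config.ru — Faye mounted as a plain Rack application, no Rails needed.
require "faye"

# Clients then speak the Bayeux protocol to /faye, e.g. from JS:
#   var client = new Faye.Client("/faye");
#   client.subscribe("/messages", function(msg) { ... });
run Faye::RackAdapter.new(mount: "/faye", timeout: 25)
```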
We are developing a web application and want to add a chat facility to our site. We are working in Ruby on Rails and found the xmpp4r gem, which can create, update, and delete users on an ejabberd server. We want users to be logged in to the ejabberd server once they log in to our website, so they can send messages to others. I went through the xmpp4r documentation but haven't been able to work out how to do all of that. Can somebody point me to documentation for this and an example with Ruby on Rails? I have already configured the ejabberd server.
Usually, this is not how you build a chat system for a website using XMPP.
The most common approach is as follows:
Create a web XMPP client in JavaScript.
Make sure the user database is shared between your Rails app and ejabberd (or use ejabberd's REST authentication module against the Rails app).
If you use Rails to connect to ejabberd as a proxy, you will end up in a world of pain trying to manage the "reactor" that runs many XMPP clients inside your Rails web application. You are not supposed to run long-running processes inside Rails. It is not designed for that (and you will run into memory, scalability, and responsiveness issues).
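The shared-authentication option can also be done with ejabberd's external auth mechanism, where ejabberd talks to a script over stdin/stdout using 2-byte length-prefixed `operation:user:server:password` packets. A sketch of just the packet handling (the credential check is a placeholder assumption, not a real lookup):

```ruby
require "stringio"

# Sketch of an ejabberd extauth bridge: ejabberd writes a 2-byte
# big-endian length followed by "operation:user:server:password";
# the script answers with a 2-byte length (always 2) and a 2-byte
# boolean result.
module EjabberdExtAuth
  module_function

  # Read one length-prefixed request and split it into fields.
  # The limit of 4 keeps colons inside the password intact.
  def read_request(io)
    len = io.read(2)&.unpack1("n") or return nil
    io.read(len).split(":", 4)
  end

  def encode_response(ok)
    [2, ok ? 1 : 0].pack("nn")
  end

  # Placeholder: a real bridge would check the shared user DB
  # (e.g. the Rails app's users table) here.
  def handle(fields)
    op, _user, _server, password = fields
    op == "auth" && password == "secret" # hypothetical check
  end
end
```

A real script would loop reading requests from STDIN and check credentials against the same user table your Rails app uses.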
My scenario is as follows:
I have a Rails app which has an Active Record model for writing down system events. Those system events can come from the Rails app or from a separate Ruby app. The Ruby app currently publishes its events to a Redis queue.
Is there a way within the Rails app to start a Redis listener and subscribe to the queue?
As mentioned by @sergio, your Rails application is not the place for a Redis subscriber. A Rails web application is a server-side application that responds to requests from clients. That is why you need a separate process (preferably a daemon) that acts as a client to your Redis server.
To daemonize your redis client you can use the daemons gem. I typically place my daemons in <app-root>/lib/daemons/
You can load your complete Rails environment in your daemon process by including these lines at the beginning:
require File.dirname(__FILE__) + "/../config/application"
Rails.application.require_environment!
That way you will have access to your models and can interact with your DB through your model classes.
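A sketch of what that daemon's core could look like (the channel name and payload shape are assumptions). The message handler is kept pure so it can be tested without a Redis server, while the blocking subscribe loop lazy-requires the redis gem:

```ruby
require "json"

# Sketch of the daemon's core (e.g. lib/daemons/redis_listener.rb).
# Channel name and payload shape are assumptions for illustration.
module RedisListener
  module_function

  # Pure message handler: parse a published JSON payload into the
  # attributes you would hand to your system-event model.
  def handle_message(raw)
    data = JSON.parse(raw)
    { name: data.fetch("event"), payload: data["payload"] }
  rescue JSON::ParserError, KeyError
    nil # ignore malformed messages
  end

  # Blocking subscribe loop; requires the redis gem at call time so
  # the handler above stays testable without a Redis server running.
  def listen!(channel: "system_events")
    require "redis"
    Redis.new.subscribe(channel) do |on|
      on.message do |_channel, raw|
        event = handle_message(raw)
        # SystemEvent.create!(event) if event  # with the Rails env loaded
      end
    end
  end
end
```

With the Rails environment loaded as above, the commented line could create the record through your model class.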
I am developing a system composed of two different Rails applications (server and client) which communicate via REST web services.
I have tests for each app individually, but I would like to add some tests for the integration between the two platforms, to assert that one creates requests compatible with what the other expects.
Any hints would be appreciated.
I have a similar architecture, and we are using VCR to record all server-side responses and avoid making real requests every time. It can get annoying, and right now I'm looking for a way to clean up data on the server after every request.
I think VCR could be a good starting point for testing the integration between your apps.
You can find documentation here -> Relish Docs
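A minimal setup, in case it helps (a sketch; the cassette directory and the webmock hook are the common defaults, not requirements):

```ruby
# spec/support/vcr.rb — minimal VCR configuration sketch.
require "vcr"

VCR.configure do |c|
  c.cassette_library_dir = "spec/cassettes"
  c.hook_into :webmock
  c.configure_rspec_metadata!  # enables the :vcr tag on examples
end

# In a spec:
#   it "creates the resource", :vcr do
#     client.create_user(name: "a")  # first run records, later runs replay
#   end
```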
I think there could be several approaches here, depending on what you have implemented.
If the client Rails app has a user interface, try writing Selenium tests to perform the integration test in your local dev environment, or in a staging environment that has both apps deployed (not ideal if the interface is still a prototype and changing frequently...).
Maybe part of the client can be extracted into a Ruby gem (e.g. the REST communication API as a Ruby gem). Then, in the testing environment, the server Rails app can use the client gem to run integration tests, i.e. call the same module functions the client uses. The client Rails app can also use the gem to make requests to the server. Here's a good guide to start migrating some of your reusable code to a rubygem.
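A sketch of what such a gem-extracted client might look like (all names are hypothetical): building the request separately from sending it lets both apps, and the integration tests, assert on the exact request shape they agree on.

```ruby
require "net/http"
require "json"
require "uri"

# Hypothetical shared client the gem could expose to both apps.
class EventsClient
  def initialize(base_url)
    @base = URI(base_url)
  end

  # Returns a Net::HTTP::Post ready to send; does not touch the
  # network, so tests can assert on path, headers, and body.
  def build_create_event(attrs)
    req = Net::HTTP::Post.new(URI.join(@base, "/api/events").path)
    req["Content-Type"] = "application/json"
    req.body = JSON.generate(attrs)
    req
  end

  def create_event(attrs)
    req = build_create_event(attrs)
    Net::HTTP.start(@base.host, @base.port) { |http| http.request(req) }
  end
end
```

The server app's integration suite can then call `build_create_event` directly and feed the result to its own request specs, guaranteeing both sides exercise the same request shape.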