I assume that if you mount an app inside a main Rails app, or use a Rails engine, then if any of the sub-apps fails or crashes, the main app and all the other sub-apps fail or crash with it.
Does anyone know this for sure? I am building a system and wondering whether I should separate my architecture into multiple standalone applications and instances, or build engines/mountable apps inside one larger app instead. The worry is: if part of the app ecosystem goes down, how does it affect the rest of the applications?
Thanks
No, it won't crash the main app. A mounted engine runs inside the same process, and an unhandled exception in engine code is rescued like any other Rails error: that request fails, but the app keeps serving.
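For context, a minimal sketch of what mounting looks like, with a hypothetical engine named Blog::Engine (the name is just an example):

```ruby
# config/routes.rb -- minimal sketch; Blog::Engine is a hypothetical engine
Rails.application.routes.draw do
  # The engine shares the host app's process and middleware stack, so an
  # exception raised in an engine controller returns an error response for
  # that one request only; it does not take the host app down.
  mount Blog::Engine, at: "/blog"
end
```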
I have multiple Electron apps. One is the main app, and there are several other feature apps. There are a few buttons on the main app that cause the feature apps to open. The problem is that every app has its own main process, which causes more CPU to be used. Is it possible to use only one main process and share the renderer processes?
That's exactly what it's designed for; take a look at this repo:
https://github.com/electron/electron-api-demos/tree/master/renderer-process/windows
It depends. If you have a single Electron application, then you can have it display multiple browser windows, each with its own renderer process. But you can't separate the main process and the renderer processes into different 'executables' and connect them.
I have one Electron application running that 'hosts' several apps, and the main app is a launcher that allows me to start them (it's a tray application). So all web apps are connected to the same main process.
I inherited a legacy app running Thinking Sphinx v3. I've been working on a large update for it, upgrading Rails, etc.
My updated app now has a different Thinking Sphinx index, but it shares the same schema. It also uses delta indexing with Delayed Job.
I have a beta environment fully up and running, but I now want to point the beta app at the production database so my colleagues can test the update, safe in the knowledge that if anything goes awry they can always fall back to the live app.
Is it possible for these two environments to co-exist? How should I be configuring my app or the database server?
It's generally possible to have two apps pointing to the same database, yes. Of course, there may be behaviours in one that impact the other, so you'd want to consider that complication!
With regards to Thinking Sphinx: each app's daemon and index data will be separate from the other, so that won't be a problem either. If you're running both apps on the same machine, though, you'll want to make sure the daemons are using different ports through the mysql41 setting.
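As a sketch, the port can be pinned per app in config/thinking_sphinx.yml. The values below are examples only, and I'm assuming the beta app runs with RAILS_ENV=beta; 9306 is Sphinx's default mysql41 port, which the live app would keep:

```yaml
# config/thinking_sphinx.yml in the beta app -- example value only
beta:
  mysql41: 9307
```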
I'm a bit overwhelmed by the sheer number of possible solutions the Rails community has created for my problem, so perhaps someone can help me figure out how to solve it best.
What I want to do is write a Rails app that behaves kind of like Dropbox. On the one hand, it should be a web interface where I can upload and download files to my web server; this interacts with my database and all that stuff. On the other hand, I have SSH access to that server and can put files there manually. Now I want these file system actions to trigger my Rails app to do the things it would do if I had created the files via the web interface.
So I somehow need to write a daemon, right? There are a lot of solutions, like
daemons.rubyforge.org/
github.com/mirasrael/daemons-rails
github.com/costan/daemonz
github.com/kennethkalmer/daemon-kit
Another feature I would like is for my Rails app to automatically spawn and stop my daemon when I start or quit the Rails app, respectively. So daemonz seems like the best solution. But as I googled further I found
github.com/FooBarWidget/daemon_controller/
which seems a lot more "high tech" and is already in use, since I deploy with Passenger. But I don't understand whether it kills my daemons when I quit Rails. I suppose that is not the case, and so I wonder how to implement this in my app.
The way to implement a "thing" that reacts to file system changes seems straightforward to me. I'd use
github.com/guard/listen/
(an alternative would be: github.com/ttilley/fssm )
But what I don't understand, as this is the first time I'm really faced with these protocol things, is whether this spawns a server I'm able to communicate with, or what kind of object I have to deal with.
The last thing I would like to implement is a kind of worker queue, so that the listening for file system changes is separated from the actions of my Rails app. But there are so many solutions that I'm totally overwhelmed trying to pick one:
github.com/tobi/delayed_job/
github.com/defunkt/resque
http://backgroundrb.rubyforge.org/
And what is
http://godrb.com/
all about? How could that help me?
Has anyone hints how to solve this? Thanks a lot!
Jan
P.S. I'd like to post links to all the GitHub projects, but unfortunately I don't have enough 'reputation'.
I'd definitely look into creating a process (daemon) that monitors the relevant directory. Then your Rails app can just put files into it without having to know anything about the back end, and it'll work with SSH too.
Your daemon can load the Rails environment & communicate with your database. I'd leave all the communication between them at that level.
As for making it start/stop with your Rails app... are you sure? I use god (the Ruby gem) to start/monitor processes. It will "daemonize" your Ruby app for you, too. If you want to, you can actually tell god to stop your directory-monitor process and then exit when Rails stops. And you can fire off god from a Rails initializer.
However, if you might find yourself using SSH or some other means to put files into that directory when Rails is not running, you might look into putting a script into /etc/init.d to automatically start god when the server boots up.
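A minimal god config for that might look like the following (the script path and watch name are hypothetical):

```ruby
# watcher.god -- minimal sketch; paths and names are hypothetical
God.watch do |w|
  w.name  = "file-watcher"
  w.start = "ruby /var/www/myapp/script/watch_files.rb"
  w.keepalive  # restart the watcher if it ever dies
end
```

You'd load it with `god -c watcher.god`, and `god stop file-watcher` shuts the watcher down.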
HTH
I think you want something like Guard for monitoring the changes on the filesystem and performing actions when changes occur.
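Guard is built on top of the listen gem the question links to; if you only need the watching part, a bare listen script (with a recent version of the gem) is about this small. The directory path is hypothetical:

```ruby
require 'listen'

# Watch an uploads directory; listen yields arrays of changed paths.
listener = Listen.to('/var/www/myapp/uploads') do |modified, added, removed|
  added.each { |path| puts "new file: #{path}" }  # e.g. enqueue a job here
end
listener.start  # non-blocking; runs in a background thread
sleep           # keep the script alive
```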
As for god, you should definitely look into it. It will make starting/stopping processes you depend on considerably easier. We used Bluepill for a while, but it had so many bugs that we ditched it and moved to god, which IMHO is a lot more pleasant to work with, for the most part.
Have you tried creating a script file, e.g. startDaemon.rb, and then placing it in config/initializers/?
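As a rough sketch of that idea (file names are hypothetical; note that under Passenger this would spawn one watcher per Rails process):

```ruby
# config/initializers/start_daemon.rb -- rough sketch, names hypothetical
watcher = Rails.root.join("script", "watch_files.rb").to_s
pid = Process.spawn("ruby", watcher)  # start the watcher as a child process
Process.detach(pid)                   # reap it automatically; don't block boot
```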
I have two Grails (1.2.1) applications deployed on two different app servers. One app contains the main site (views, controllers, domain, etc.) and the other has a Quartz plugin that performs the core and backend processing. Both apps share the domain classes and the same DataSource config, which means the two apps access the same database and tables.
My question is: is there any penalty for querying the database this way?
I'm just noticing some slowness on the main site app when the Quartz job app is running; no clear proof or stats, though. Can the Hibernate component in each app handle concurrency and transactions properly in such an event? Or do I need to configure something in grails-app/conf in each app too? Right now I haven't added any extra configuration.
Thanks.
The main problem I could think of would be issues with 2nd level caching. If both apps try to cache data, it can cause StaleObjectExceptions and similar when the caches get out of sync with the DB due to it being changed by the other app. 2nd level caching is disabled by default, though, so you might not have an issue there.
It also depends on whether you are using the optimistic locking provided by default or explicit locks with the lock() method on your domain classes. Optimistic locking should not cause a slowdown (but could cause exceptions on save if the other app has updated the row).
Have you considered an architecture where one application masters your domain classes and the other integrates with that through messages or web services calls? In so doing, you might avoid some of the problems associated with the duplication across your applications.
Is there some way to change Rails environments midway through a test? Or, alternatively, what would be the right way to set up a test suite that can start up Rails in one environment, run the first half of my test in it, then restart Rails in another environment to finish the test? The two environments have separate databases.
Some necessary context: I'm writing a Rails plugin that allows multiple installations of a Rails app to communicate with each other with user assistance, so that a user without Internet access can still use the app. They'll run a local version of an app, and upload their work to the online app by saving a file to a thumbdrive and taking it to an Internet cafe.
The plugin adds two special environments to Rails: "offline-production" and "offline-test". I want to write functional tests that involve both the "test" and "offline-test" environments, to represent the main online version of the app and the local offline version of the app respectively.
Edit: I've been reading up on Rack::Test, and it seems like it might be the way to go, because it places the testing framework outside of Rails itself. But I still have no idea how I can use it to do a test that involves more than one environment. Any ideas, anyone?
Maybe think about the issue from the perspective of having more than one db connection, instead of having more than one environment? You can't really switch environments part way through, at least without a lot of hacking and screwing things up. :)
http://anandmuranal.wordpress.com/2007/08/23/multiple-database-connection-in-rails/
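Building on that link, a hedged sketch of the two-connection approach; it assumes you add an extra offline_test entry to config/database.yml, and the class names here are made up:

```ruby
# Models that should use the second database inherit from an abstract base
# class. "offline_test" is an assumed entry in config/database.yml.
class OfflineBase < ActiveRecord::Base
  self.abstract_class = true
  establish_connection :offline_test
end

class OfflineWork < OfflineBase
  # reads and writes go to the offline_test database
end
```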