Ruby: How to use the console commands of one app in another? - ruby-on-rails

I have one Ruby app and one plugin. Each, within its own workspace, has some console commands independent of the other. I want to use the plugin's commands by somehow instantiating the plugin within the original app's workspace. The following example explains my requirements. Some guidance on how to achieve this would be appreciated.
cd main_app
main_app -h
The following are main_app commands:
-a # does apple for main app
-b # does basket for main app
-set_ws # sets/enables the workspace of the specified plugin (need to implement this).
cd ../plugin_app
plugin_app -h
The following are plugin_app commands:
-c # does cat for plugin_app
-d # does dog for plugin_app
I would like to implement something of this sort:
cd main_app
main_app -set_ws plugin_app
main_app -h
The following are main_app commands:
-a # does apple for main app
-b # does basket for main app
-set_ws # sets/enables the workspace of the specified plugin.
The following are plugin_app commands:
-c # does cat for plugin_app
-d # does dog for plugin_app

Since we don't know anything about your plugin, there is no way we can tell you how to integrate the two.
If the plugin were properly implemented to be integrated into another application, I would guess you could instantiate some CLI class, or even a service class, and call the proper API.
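As a rough sketch of that first approach, assuming the plugin ships a requirable entry point and exposes something like a CLI class (the PluginApp::CLI name, its run method, and the require path below are hypothetical, not a known API):
# Hypothetical: only works if plugin_app actually exposes a class like this.
require 'plugin_app'

if args.first == '-c'
  # Hand the unrecognized flag over to the plugin's own command handler.
  PluginApp::CLI.new.run(args)
end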
Otherwise, the only coding I can suggest, given your input (and assuming the Ruby app at least knows where the plugin is), is for the Ruby app to simply call the plugin's CLI from a shell using backticks:
if args == ['-c']
  # Shell out to the plugin's CLI; backticks return the command's stdout.
  puts `../plugin_app -c`
end
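If you also want the plugin's stderr and exit status rather than just its stdout, Open3 from Ruby's standard library is a slightly safer way to shell out than backticks; this sketch reuses the ../plugin_app path from the example above:
require 'open3'

# Run the plugin CLI and capture stdout, stderr and exit status separately.
stdout, stderr, status = Open3.capture3('../plugin_app', '-c')
puts stdout
warn stderr unless status.success?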

Related

Redmine plugin creation - Could not find generator 'redmine_plugin'

I run Redmine 3.4 with Rails (5.2.0) and Docker 18.03.1-ce on Ubuntu 16.04 Xenial (which is new for me), following this GitHub repository: https://github.com/sameersbn/docker-redmine
I create my Rails app in the same folder where the docker-compose.yml has been created, and cd to it.
Then I have the exact same problem as described in this Redmine post (http://www.redmine.org/boards/3/topics/48309?r=48507#message-48507): when I try the command rails generate redmine_plugin Plug_test, these two messages appear:
Running via Spring preloader in process 32109
Could not find generator 'redmine_plugin'
So I tried the commands that Keith suggested, and when running the generate command again, the Spring message disappears, but the generate command still doesn't work (Could not find generator 'redmine_plugin').
Any idea what to do? I don't know if I'm going in the right direction.
Thanks a lot for your help.
Well, simple problem: you're running the command from outside of your Redmine app directory. You need to go into your Redmine app directory; after that you can run rails generate redmine_plugin Plugin_test from there.
As Ravi mentioned above, you need to go into your Redmine app directory instead of your Rails app directory.
Or, maybe you can execute the plugin generate command via docker run.
# e.g. in case the plugin name is "myplugin"
docker run --name=redmine -it --rm \
--volume=/srv/docker/redmine/redmine:/home/redmine/data \
sameersbn/redmine:3.4.4-2 \
app:rails generate redmine_plugin myplugin
If this works fine, a plugin directory named "myplugin" will be generated under the /srv/docker/redmine/redmine/plugins/ directory.
Personally, I think you had better not use Docker to create and develop Redmine plugins, especially if you are not very familiar with Redmine and Docker.
I hope this is of some help.

Best way to use rails console with cloud foundry

We've been experimenting with CF over Heroku and running into some issues. One of them deals with accessing the rails console in a CF application instance (AI). We're using Pivotal's PWS and have tried a number of things, including:
cd app; export HOME=$(pwd); source .profile.d/0_ruby.sh; rails c
and
cd app; export HOME=$(pwd); source .profile.d/*.sh; rails c
Both of these are hit or miss and typically don't work.
It seems a bit ridiculous that it's THIS much work to access the rails console via CF. I feel like there has to be a better, faster way.
Does anyone have any tips?
For anyone saying we should cf ssh in, here is what happens:
vcap#2f4663e4-f876-490c-65e2-a498:~$ cd app
vcap#2f4663e4-f876-490c-65e2-a498:~/app$ ls .profile.d/
000_multi-supply.sh 0_ruby.sh
vcap#2f4663e4-f876-490c-65e2-a498:~/app$ source .profile.d/0_ruby.sh
vcap#2f4663e4-f876-490c-65e2-a498:~/app$ cd ..
vcap#2f4663e4-f876-490c-65e2-a498:~$ rails c
bash: rails: command not found
vcap#2f4663e4-f876-490c-65e2-a498:~$ source app/.profile.d/000_multisupply.sh
vcap#2f4663e4-f876-490c-65e2-a498:~$ rails c
bash: rails: command not found
As of writing this, to fire up a Rails console, run:
cf ssh my-app -t -c "/tmp/lifecycle/launcher /home/vcap/app 'rails c' ''"
This will SSH into the container and use the lifecycle launcher, which sets up the environment for you, to execute the command.

How to change the screenshot path in Rails 5.1 system test

Using Rails 5.1.2
I'm creating a system test and using the take_screenshot method.
How do I change the location where these screenshots are created?
Looks like the image path is hardcoded, so you won't be able to change it through configuration currently. It probably wouldn't be too difficult to change if you wanted to open an issue over there or create a pull request for them.
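If you need a workaround today, one option is to override the private helper that builds the path. This is only a sketch: it assumes your Rails version still builds the file name via the private image_path/image_name helpers in ActionDispatch::SystemTesting::TestHelpers::ScreenshotHelper, so it leans on a private API that may change between releases; the tmp/my_screenshots directory is an arbitrary example.
# test/application_system_test_case.rb
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]

  private

  # take_screenshot builds its file name from this private helper,
  # so overriding it redirects where screenshots get written.
  def image_path
    "tmp/my_screenshots/#{image_name}.png"
  end
end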
If you want to collect the screenshots on CI, here's the solution I came up with. In my setup I already had a "test-runner.sh" script with the rspec invocation at the end. There's probably also some sort of after_script: setting available in the YAML config, but I didn't look into it.
rspec ......
status=$?
# /tmp/test-results is where CircleCI looks for "artifacts" which it makes
# available for download after a test run
[ -d "tmp/screenshots" ] && cp -a tmp/screenshots /tmp/test-results/
exit $status

Bash Script to run three different rails apps on the local server?

I have three apps that I want to run with the rails server at the same time, and I also want the option to kill all the servers from one location.
I don't have much experience with Bash, so I'm not sure what command I would use to launch the server for a specific app. Since the script won't be in the app directory, plain rails s won't work.
From there, I suppose that if I can gather the PIDs of the three server processes, I can have the script prompt for user input and kill the three processes whenever something is entered. I'm just unsure of how to get the PIDs.
Additionally, each app has a few environment variables that I wanted to have different values than those assigned in the apps config files. Previously, I was using export var=value before rails s, but I'm not sure how to guarantee each separate process is getting the right variables.
Any help is much appreciated!
The Script
You could try something like the following:
#!/bin/bash
case "$1" in
  start)
    pushd app/directory
    # Each server runs in the background inside its own subshell, so the
    # exported variables stay local to that server; $! is the PID of the
    # most recently backgrounded job.
    (export FOO=bar; rails s ... & echo $! > pid1)
    (export FOO=bar; rails s ... & echo $! > pid2)
    (export FOO=bar; rails s ... & echo $! > pid3)
    popd
    ;;
  stop)
    kill $(cat pid1)
    kill $(cat pid2)
    kill $(cat pid3)
    rm pid1 pid2 pid3
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
exit 0
Save this script into a file such as script.sh and chmod +x script.sh. You'd start the servers with ./script.sh start, and you can kill them all with ./script.sh stop. You'll need to fill in all the details in the three lines that start up the servers.
Explanation
First is the pushd: this changes the directory to where your apps live. The popd after the three startup lines returns you to the location where the script lives. The parentheses around the (export blah blah) create a subshell, so the environment variables that you set inside the parentheses, via export, won't exist outside of them. Additionally, if your three apps live in different directories, you could put a cd inside each of the three parentheses to move to the app's directory before the rails s. The lines would then look something like: export FOO=bar; cd app1/directory; rails s ... & echo $! > pid1. Don't forget the semicolon after the cd command! In this case, you can also remove the pushd and popd lines.
In Bash, $! is the process ID of the most recently backgrounded command, which is why each rails s is sent to the background with &. We echo that PID and redirect it (with >) to a file called pid1 (or pid2 or pid3). Later, when we want to kill the servers, we run kill $(cat pid1). The $(...) runs a command and returns its output inline. Since the pid files only contain the process ID, cat pid1 just returns the process ID number, which is then passed to kill. We also delete the pid files after we've killed the servers.
Disclaimer
This script could use some more work in terms of error checking and configuration, and I haven't tested it, but it should work. At the very least, it should give you a good starting point for writing your own script.
Additional Info
My favorite bash resource is the Advanced Bash-Scripting Guide. Bash is actually a fairly powerful language with some neat features. I definitely recommend learning how bash works!
Why don't you try Capistrano, a framework for executing commands in parallel on multiple remote machines via SSH? It has lots of recipes for this.
You are probably better off setting up pow.cx, which would run each server as it's needed, rather than having to spin up and shut down servers manually.
You could use Foreman to run, monitor, and manage your processes.
I realize I'm late to the party here, but after searching the internet for a good solution to this (finding this page but few others, and none with a full solution), and after trying unsuccessfully to get Prax working, I decided to write my own solution to this problem and give it back to the community!
Check out my rdev bash script gist - a bash script you put in your ~/bin directory. This will create a new tab in gnome-terminal for each rails app with the app name and port in the tab's title. It verifies the app launched successfully by checking the port is in use and the process is actually running. It also verifies the rails app shutdown is successful by ensuring the port is no longer in use and the process is no longer running.
Setup is super easy, just change these two config values:
# collection of rails apps you want to start in development (should match directory name of rails project)
# note: the first app in the collection will receive port 3000, the second 3001 and so on
#
rails_apps=(app1 app2 app3 etc)
#
# The root directory of your rails projects (~/ is assumed, do not include)
#
projects_root="ruby/projects/root/path"
With this script you can start all your Rails apps with one command or stop them all, and you can stop, start and restart individual apps as well. While the OP asked about running 3 apps, this will allow you to run as many as you need, with ports assigned in order starting at 3000 for the first app in the list. Each app is started using the proper Ruby version thanks to chruby, and the .env is sourced on the way up so your app will have everything it needs. Once you are done developing, just rdev stop and all your Rails apps will be killed and the terminal windows closed.
# Usage Examples:
#
# Show Help
# ~/> rdev
# Usage: rdev {start|stop|restart} [app port]
#
# start all rails apps
# ~/> rdev start
#
# start a single rails app
# ~/> rdev start app port
#
# stop all rails apps
# ~/> rdev stop
#
# stop a single rails app
# ~/> rdev stop app port
#
# restart a single rails app
# ~/> rdev restart app port
For the record, all testing was done on Ubuntu 18.04. This script requires bash, chruby, gnome-terminal and lsof, and takes advantage of the BASH_POST_RC trick.

Optimizing Rails loading for maintenance scripts

I wrote a script that does maintenance tasks for a Rails application. The script uses a class that relies on models defined in the application. Just as an example, let's say the application defines a model User, and my class (used within the script) sends messages to it, like User.find id.
I am looking for ways to optimize this script, because right now it has to load the application environment: require '../config/environment'. This takes ~15 seconds.
If the script did not use the application codebase to do its job, I could have replaced the model abstractions with raw SQL. But unfortunately I can't do that, because I would have to repeat code in the script that is already present in the codebase. Not only would this violate the DRY principle and require a lot of work, the script would also not be very maintainable if the model methods I am using change.
I would like to hear ideas on how to approach this problem. The script is not run from the application itself, but from the shell (with Capistrano, for instance).
I hope I've described the problem clear enough. Thank you.
Could you write a little daemon that blocks on a read from a pipe (or a named fifo, or a Unix domain socket, or, with more complexity, a TCP port) and accepts 'commands' that would be run against your database?
#!/usr/bin/ruby
# Load the Rails environment once at startup; after that the daemon stays
# resident and commands run without paying the ~15 second boot cost again.
require '../config/environment'

loop do
  File.open("/tmp/fifo", "r") do |f|
    f.each_line do |line|
      case line
      when "cleanup" then puts "clean!"
      when "publish" then puts "published!"
      else puts "invalid command, ignoring"
      end
    end
  end
end
You could start this thing up with vixie cron's @reboot specifier, or you could run it via Capistrano commands, or run it from init or an init script. Then you write your Capistrano rules (the ones you have now) to simply echo commands into the fifo:
First,
mkfifo /tmp/fifo
In one terminal:
$ ./env.rb
In another terminal:
$ echo -n "cleanup" > /tmp/fifo
$ echo -n "publish" > /tmp/fifo
$ echo -n "go away" > /tmp/fifo
The output in the first terminal looks like this:
clean!
published!
invalid command, ignoring
You could make the matching as friendly (perhaps allow plain echo, rather than require echo -n as my example does) or unfriendly as you want. And the commands that get run can of course call into your model files to do their work.
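For example, here's a rough sketch of what the cleanup branch might look like once it calls into the application's models instead of just printing (this slots into the daemon's f.each_line block above; the User model comes from the question, while the column name and 30-day cutoff are made-up placeholders):
case line
when "cleanup"
  # Hypothetical maintenance task using the app's own model code.
  User.where("last_login_at < ?", 30.days.ago).destroy_all
  puts "clean!"
when "publish" then puts "published!"
else puts "invalid command, ignoring"
end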
Please make sure you choose a good location for your fifo -- /tmp/ is probably a bad place, as many distributions clear it on reboot. Also make sure you set the fifo owner and permissions (chown and chmod) appropriately for your application -- you might not want to allow your Firefox Flash plugin to write to this file and command your database.
