Currently I am using the system command wget to download the video and store it on our server:
system("cd #{RAILS_ROOT}/public/users/familyvideos/thumbs && wget -O sxyz1.mp4 #{video.download_url}")
but it's saying:
cd: 1: can't cd to /var/home/web/***.com/public/users/familyvideos/thumbs
Does anybody have any idea? Also, please suggest an alternative way to do this.
A much better solution would be to use open-uri for this.
http://www.ruby-doc.org/stdlib/libdoc/open-uri/rdoc/
This snippet should do the trick:
require "open-uri"

url = video.download_url
new_file_path = "#{Rails.root}/public/users/familyvideos/thumbs/your_video.mp4"

File.open(new_file_path, "wb") do |file|
  # Stream the download to disk instead of reading the whole video into memory
  IO.copy_stream(URI.open(url), file)
end
It's the cd command complaining, and that's low-level operating system stuff. The most likely reason is that one or more of the folders after #{RAILS_ROOT}/public doesn't exist yet, or there's a space or some other character in the file path causing problems. Try surrounding the directory name in quotes.
`cd "#{RAILS_ROOT}/public/users/familyvideos/thumbs" && wget -O sxyz1.mp4 #{video.download_url}`
As for better ways, wget is pretty tried and tested so there's nothing wrong with using it, since this is what it was designed for. You could just give it the folder and the filename at the same time instead of cd'ing to it first, however.
`wget -O "#{RAILS_ROOT}/public/users/familyvideos/thumbs/sxyz1.mp4" #{video.download_url}`
Note also I'm using the backtick method of execution instead of system so you're not dealing with escaping double quotes. You could also use %x{wget ...}.
I am trying to determine my test coverage. To do this, I compile my program with a newer version of gcc:
CC=/usr/local/gcc8/bin/gcc FC=/usr/local/gcc8/bin/gfortran ./configure.sh -external cmake -d
After compiling this with the --coverage option, I run my tests, which creates *.gcda, *.gcno and *.o.provides.build files. If I run something like:
$ /usr/local/gcc8/bin/gcov slab_dim.f90.gcda
File '/Local/tmp/fleur/cdn/slab_dim.f90'
Lines executed:0.00% of 17
Creating 'slab_dim.f90.gcov'
which shows me that gcov runs fine. However, if I try to run lcov on these results:
lcov -t "result" -o ex_test.info -c -d CMakeFiles/
I get error messages like these for every file:
Processing fleur.dir/hybrid/gen_wavf.F90.gcda
/Local/tmp/fleur/build.debug/CMakeFiles/fleur.dir/hybrid/gen_wavf.F90.gcno:version 'A82*', prefer '408R'
/Local/tmp/fleur/build.debug/CMakeFiles/fleur.dir/hybrid/gen_wavf.F90.gcno:no functions found
geninfo: WARNING: gcov did not create any files for /Local/tmp/fleur/build.debug/CMakeFiles/fleur.dir/hybrid/gen_wavf.F90.gcda!
This is the same error message I get when I use the system's standard /usr/bin/gcov.
This leads me to believe that lcov calls the old gcov rather than the new one. How do I force lcov to use the new version?
The simplest solution I found was to run /usr/bin/gcov-8 instead of /usr/bin/gcov.
The $PATH environment variable needs to be extended with /usr/local/gcc8/bin/.
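In practice that means something like the following before invoking lcov (assuming the gcc 8 tools live in /usr/local/gcc8/bin; lcov also has a --gcov-tool option if you'd rather not touch PATH):

```shell
# Put the gcc 8 tools first so lcov's internal gcov call finds the new one
export PATH=/usr/local/gcc8/bin:$PATH
lcov -t "result" -o ex_test.info -c -d CMakeFiles/

# or, without changing PATH, point lcov at the matching gcov directly:
lcov -t "result" -o ex_test.info -c -d CMakeFiles/ --gcov-tool /usr/local/gcc8/bin/gcov
```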
The source of the error is clear from the fact that you get the same result using /usr/bin/gcov. /usr/bin/gcov should be a link to a binary from the installed compiler, but in your case the link doesn't point to a binary within the gcc 8.2 installation.
You can delete the link and re-create it to point to the correct gcov, or you can set up something like update-alternatives to change the version of gcov whenever you change the default compiler.
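If you go the update-alternatives route, the registration could look roughly like this (the priority value is arbitrary, and the path assumes your gcc 8 install location):

```shell
# Register the gcc 8 gcov as an alternative for /usr/bin/gcov
sudo update-alternatives --install /usr/bin/gcov gcov /usr/local/gcc8/bin/gcov 80

# Later, pick interactively among all registered gcov versions
sudo update-alternatives --config gcov
```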
The previous answer should work as well if you have a binary called gcov in /usr/local/gcc8/bin, because if you add that path, into your environment PATH first, it will be selected first.
I'm trying to write (what I thought would be) a simple bash script that will:
run virtualenv to create a new environment at $1
activate the virtual environment
do some more stuff (install django, add django-admin.py to the virtualenv's path, etc.)
Step 1 works quite well, but I can't seem to activate the virtualenv. For those not familiar with virtualenv, it creates an activate file that activates the virtual environment. From the CLI, you run it using source
source $env_name/bin/activate
Where $env_name, obviously, is the name of the dir that the virtual env is installed in.
In my script, after creating the virtual environment, I store the path to the activate script like this:
activate="`pwd`/$ENV_NAME/bin/activate"
But when I call source "$activate", I get this:
/home/clawlor/bin/scripts/djangoenv: 20: source: not found
I know that $activate contains the correct path to the activate script, in fact I even test that a file is there before I call source. But source itself can't seem to find it. I've also tried running all of the steps manually in the CLI, where everything works fine.
In my research I found this script, which is similar to what I want but is also doing a lot of other things that I don't need, like storing all of the virtual environments in a ~/.virtualenv directory (or whatever is in $WORKON_HOME). But it seems to me that he is creating the path to activate, and calling source "$activate" in basically the same way I am.
Here is the script in its entirety:
#!/bin/sh
PYTHON_PATH=~/bin/python-2.6.1/bin/python

if [ $# = 1 ]
then
    ENV_NAME="$1"
    virtualenv -p $PYTHON_PATH --no-site-packages $ENV_NAME

    activate="`pwd`/$ENV_NAME/bin/activate"
    if [ ! -f "$activate" ]
    then
        echo "ERROR: activate not found at $activate"
        return 1
    fi

    source "$activate"
else
    echo 'Usage: djangoenv ENV_NAME'
fi
DISCLAIMER: My bash script-fu is pretty weak. I'm fairly comfortable at the CLI, but there may well be some extremely stupid reason this isn't working.
If you're writing a bash script, call it by name:
#!/bin/bash
/bin/sh is not guaranteed to be bash. This caused a ton of broken scripts in Ubuntu some years ago (IIRC).
The source builtin works just fine in bash; but you might as well just use dot like Norman suggested.
In the POSIX standard, which /bin/sh is supposed to respect, the command is . (a single dot), not source. The source command is a csh-ism that has been pulled into bash.
Try
. $env_name/bin/activate
Or if you must have non-POSIX bash-isms in your code, use #!/bin/bash.
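A minimal way to see the difference, with a throwaway file standing in for virtualenv's bin/activate:

```shell
#!/bin/sh
# A throwaway file stands in for virtualenv's bin/activate here.
tmpdir=$(mktemp -d)
echo 'VENV_STATUS=activated' > "$tmpdir/activate"

# '.' is the POSIX spelling; it works in sh, dash, and bash alike,
# whereas 'source' is only guaranteed in bash.
. "$tmpdir/activate"
echo "$VENV_STATUS"    # prints: activated

rm -rf "$tmpdir"
```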
In Ubuntu if you execute the script with sh scriptname.sh you get this problem.
Try executing the script with ./scriptname.sh instead.
It's best to add the full path of the file you intend to source, e.g.
source ./.env instead of source .env
or source /var/www/html/site1/.env
I tried to edit my bash_profile earlier. I think I put a space after the '=', and then I couldn't use any command-line tools. I've now managed to get them back, although my terminal now says I don't have rails installed. I sudo install it, but it fails because it asks me to replace the rake gem with the rake executable; I say no to that request. I have been using rails to follow a tutorial, so unless it has been wiped, I have it. There must be something wrong with the path, but I don't know what the bash_profile should be. It is currently:
PATH=/usr/local/rvm/bin:$PATH
PATH=/Users/me/.rvm/gems/ruby-2.0.0-p247/bin
PATH=Users/me/.rvm/gems/ruby-1.9.3-p448/bin
PATH="/Applications/Postgres.app/Contents/MacOS/bin:$PATH"
"$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into $
export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
I don't know how to go about fixing this. Thanks in advance for any help you might be able to offer.
These two lines replace the entire PATH with a single directory:
PATH=/Users/me/.rvm/gems/ruby-2.0.0-p247/bin
PATH=Users/me/.rvm/gems/ruby-1.9.3-p448/bin
There is now absolutely nothing in your command search path except "Users/me/.rvm/gems/ruby-1.9.3-p448/bin", which is missing its leading / so it only resolves if you're in the root directory, no less.
Then you add some stuff to the PATH without replacing what's there, which is fine, but then you do this:
export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
Which completely undoes all of that and gives you just the above literal path.
You generally don't want to assign to PATH without a $PATH somewhere on the right hand side.
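A repaired setup would prepend each directory while keeping the existing $PATH on the right-hand side; a sketch (directory names taken from the question, your installed versions may differ):

```shell
#!/bin/sh
# Each line *prepends* a directory and preserves the rest of the PATH,
# instead of replacing the whole thing.
PATH="/Applications/Postgres.app/Contents/MacOS/bin:$PATH"
PATH="$HOME/.rvm/gems/ruby-2.0.0-p247/bin:$PATH"
export PATH
echo "$PATH"
```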
In my Rails controller, I take a URL that the user inputs and run the system command wget:
system("wget #{url}")
I'm afraid that the user might put in something like www.google.com && rm -rf ., which would make the controller execute the command
system("wget www.google.com && rm -rf .")
which deletes everything. How should I prevent against this kind of attacks? I'm not sure what other things the user could put in to harm my system.
Per this thread:
You can avoid shell expansion by passing arguments to the script individually:
system("/bin/wget", params[:url])
Per the documentation on Kernel#system this form does not invoke a shell. Constructs like && are shell constructs, so if you use this form, then the param will be passed to /bin/wget literally as an argument.
That said, still be suspicious of input, sanitize where possible, and if feasible, run it as a non-privileged (or better yet, jailed) user.
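A quick way to see the difference concretely, with /bin/echo standing in for wget so nothing is downloaded or deleted:

```ruby
# With the multi-argument form, the whole string reaches /bin/echo as a
# single literal argument; no shell runs, so "&&" has no special meaning.
malicious = "www.google.com && rm -rf ."
system("/bin/echo", malicious)
# prints the literal text: www.google.com && rm -rf .
```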
Joining commands together with && (or ;, or |) is a shell feature, not something that wget itself understands. If you're using a function that passes a command line to a shell (such as the system() function in many languages), you're at risk. If you execute the wget program directly (rather than executing a shell and giving it a command line), you won't be at risk of that particular attack.
However, the attacker could still do other things, like abuse wget's -O option to overwrite files. You'd be better off not using wget at all — if your goal is to download a file, why not just use an HTTP library to do it directly in your own program's code?
If all you want to do is to just retrieve the content of the URL, it is better to completely avoid the use of 'system' or any other means of running a command on the server.
You can use an http client such as Httparty to fetch the URL content.
response = HTTParty.get(url)
I want to replace some utilities (like telnet) with transparent wrappers (with loggers).
At first I used aliases, which worked nicely at the command line, but gnome doesn't understand shell aliases, so when people would launch the utilities as the shell for a gnome-terminal profile, it would run the actual utility instead of the wrapper.
Then I tried adding a new folder with symlinks and prepending it to PATH (security isn't a huge concern, plus it's a special folder just for these symlinks) in ~/.bashrc, but they still run the original (I'm guessing gnome doesn't run .bashrc, since it works from the command line). Any ideas where the PATH setting needs to go?
Maybe update-alternatives fits your needs?
I found two ways to do this that seem to work like I want(sourcing scripts for gnome env).
First, putting it in ${HOME}/.gnomerc (though I found some places saying you should manually exec the gnome session afterwards and others that don't; it seems to work fine without it, and I'm afraid of breaking login).
Putting it in ~/.profile seems to work, so I just
echo 'PATH=~/.symlink_dir/:${PATH}' >> ~/.profile
(note that this is ignored by bash if a ~/.bash_profile exists, so you may want to manually source it from ~/.bash_profile just in case:
echo 'source ~/.profile' >> ~/.bash_profile).
If you really want to use your replacement utilities throughout, you could put symlinks to your replacements in /usr/bin/ (or wherever as appropriate) and move the originals to /usr/bin/originals/ (or wherever).
If you do that, you'd better make sure that your wrappers are rock solid though. Depending on what you're replacing, errors might prevent booting, which is generally undesirable.
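As a sketch of what such a wrapper could look like: here /bin/echo stands in for the real utility, and the directory and log paths are illustrative, not prescribed.

```shell
#!/bin/sh
# Create a wrapper named 'telnet' that logs its arguments, then execs the
# real program (/bin/echo stands in for /usr/bin/telnet in this sketch).
dir=$(mktemp -d)
log="$dir/wrapper.log"

cat > "$dir/telnet" <<EOF
#!/bin/sh
printf '%s\n' "telnet \$*" >> "$log"
exec /bin/echo "\$@"
EOF
chmod +x "$dir/telnet"

# Prepend the wrapper directory to PATH so it shadows the original:
PATH="$dir:$PATH"
telnet host 23     # runs the wrapper, which logs and then execs
cat "$log"         # shows: telnet host 23
```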
It might not be what you are asking, but have you tried changing the commands of the launchers from the menu editor?
If you are using Gnome 3, you will have to download the alacarte package.