su - $USER -p -c "$CMD" not accessing path - ruby-on-rails

I am a former Windows guy and I am having trouble with the Unix shell.
As I understand it, a command like su - $USER -p -c "$CMD" should have access to the PATH of the given environment, but it does not. When I change it to su - $USER -p -c "export PATH=$PATH; $CMD", it works as expected (I guess).
I am trying this code in an init script, and I have another question here related to this one. (Sorry for the duplication, but I am not sure where the correct place to ask is.)
First question: why does su - $USER -c "$CMD" forget all previously defined env variables?
Is it a correct approach to set PATH inside the command, as in su - $USER -p -c "export PATH=$PATH; $CMD"?
Edit
I tried removing the -: su $USER -p -c "whoami && echo $PATH && $CMD". Still not working.
When I experiment with su - $USER -p -c "whoami && echo $PATH && $CMD", I can see that $USER and $PATH are set correctly, but it still cannot find binaries under $PATH.
Edit-2
I made a few more experiments and arrived at the shortest working form: su $USER -c "PATH=$PATH; $CMD". I am still not sure whether this is best practice.

su - means switch user and load the new user's environment (similar to what's loaded when you log in as that user to begin with). Try su instead, without the -. This switches user but keeps the environment as it was before you switched.

Well, su - means use a login shell, so it takes on the environment of the user you are su'ing to. If you want to keep your environment, omit the -.

man su:
The value of $PATH is reset to /bin:/usr/bin for normal users...
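The reset and the workaround can be demonstrated without su at all. In this sketch, env -i stands in for the environment scrubbing a login shell performs (su itself needs root), and /opt/myapp/bin is a made-up directory:

```shell
# su itself needs root, so env -i stands in for the environment reset
# a login shell performs (man su: PATH is reset to /bin:/usr/bin).
# /opt/myapp/bin is an invented directory for illustration.
export PATH="/opt/myapp/bin:$PATH"    # the caller's customized PATH

# The child starts with a minimal PATH, as with su -:
env -i PATH=/usr/bin:/bin sh -c 'echo "child sees: $PATH"'

# The workaround from the question works because the double quotes are
# expanded by the *calling* shell: the caller's PATH is baked into the
# command string before the child ever runs.
env -i PATH=/usr/bin:/bin sh -c "PATH=$PATH; echo \"child sees: \$PATH\""
```

This is why the double-quoted `"export PATH=$PATH; $CMD"` form works: by the time the new shell resets its environment, the old PATH value is already part of the command text.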

Related

`nsenter` + specifying a user needs environment variable assignment

I'm running a command in a network namespace using nsenter, and I wish to run it as an ordinary (non-root) user because I want to access an Android SDK installation, which exists in my own home directory.
I find that although I can specify which user I want in my nsenter command, my environment variables don't get set accordingly, and I don't see a way to set those variables. What can I do?
sudo nsenter --net=/var/run/netns/netns1 -S 1000 bash -c -l whoami
# => bash: /root/.bashrc: Permission denied
# => myuser
sudo nsenter --net=/var/run/netns/netns1 -S 1000 bash -c 'echo $HOME'
# => /root
Observe that:
When I attempt a login shell (with -l), bash attempts to source /root/.bashrc instead of /home/myuser/.bashrc
$HOME is /root
If I prepend my command with a variable assignment (HOME=/home/markham sudo nsenter --net=/var/run/netns/netns1 -S 1000 bash -c -l whoami), I get the same results.
(I'm on nsenter from util-linux 2.34.)
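The part that can be checked locally is the principle: a variable assignment placed inside the command string takes effect in the child, whereas a prefix before sudo is typically discarded by sudo's env_reset. A minimal demonstration with invented paths (no nsenter involved):

```shell
# /root stands in for the HOME the privileged context leaves behind.
env HOME=/root sh -c 'echo "before: $HOME"'
# Setting HOME inside the command string overrides it for the child
# only - which is what steers a login shell (bash -l) to the right
# user's rc files instead of /root/.bashrc.
env HOME=/root sh -c 'export HOME=/tmp; echo "after: $HOME"'
```

Applied to the question, a hedged guess would be `sudo nsenter --net=/var/run/netns/netns1 -S 1000 bash -c 'export HOME=/home/myuser; exec bash -l'`: the assignment happens after nsenter has switched uid, unlike the `HOME=...` prefix before sudo, which sudo typically strips under env_reset.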

Inside sudo, environment variables are not set when command is inline versus calling a second script

I am running a korn shell script as root and need to sudo to a different user (oracle) to execute some commands and I need access to the environment variables set in .profile.
I am seeing different behavior if I call a 2nd script to execute the commands versus executing them inline.
Here is a simple test to demonstrate. This works and displays the $ORACLE_HOME environment variable:
sudo su - oracle -s /bin/ksh -c "/home/u6vzbes/upgrade/get_oracle_home_test.ksh"
Where the called script is just this:
#!/bin/ksh
echo 'Called from script - ORACLE_HOME is ' ${ORACLE_HOME}
But this DOES NOT work, with the $ORACLE_HOME environment variable being blank:
sudo su - oracle -s /bin/ksh -c "
echo 'Called from sudo - Oracle home is ${ORACLE_HOME}'
"
Why do these two work differently? I would prefer to execute commands inline rather than have a second script as I will need to sudo to oracle multiple times throughout the root script. FYI, the environment variable is set in the .profile of the oracle user.
The single-quote escaping doesn't help here: the outer double-quoted string is expanded by the calling (root) shell before su ever runs, and root's environment has no ORACLE_HOME set. Escaping the $ defers expansion to oracle's login shell, which has already sourced .profile. The following works:
sudo su - oracle -s /bin/ksh -c "echo Called from sudo - Oracle home is \${ORACLE_HOME}"
So remove the ' and place \ in front of $.
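The timing difference can be reproduced without sudo or the oracle account; /u01/app below is an invented value for illustration:

```shell
# The calling shell has no ORACLE_HOME; the child shell sets one.
unset ORACLE_HOME
# Unescaped: the calling shell expands ${ORACLE_HOME} (empty) before
# the child even starts, so the assignment inside comes too late.
sh -c "ORACLE_HOME=/u01/app; echo \"unescaped: ${ORACLE_HOME}\""
# Escaped: \$ reaches the child as a literal $, so the child expands
# it after the assignment (for su - oracle: after .profile has run).
sh -c "ORACLE_HOME=/u01/app; echo \"escaped: \${ORACLE_HOME}\""
```

The second script in the question worked for the same reason: the variable reference lived in a separate file, so the calling shell never had a chance to expand it.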

ruby unicorn as service in docker - uses wrong rake

My problem is that I cannot start unicorn as a service in Docker, though it works just fine if I start it from the command line.
I am trying to build a Docker image with Ruby, unicorn, and the nginx web server.
I am using FROM ruby:2.3 as the base image, but I saw the same trouble with the latest Ubuntu.
This article explains pretty clearly how to use unicorn with nginx.
Everything seems to work if I start it from bash like this:
(cd /app && bundle exec unicorn -c /app/config/unicorn.rb -E uat -D)
but I see errors if I start it as a service:
service unicorn_appname start
The error is:
bundler: command not found: unicorn
After some investigation I realized that the issue is most probably in the env variables, because service essentially executes my command with a su - root -c prefix:
su - root -c " cd /app && bundle exec unicorn -c config/unicorn.rb -E uat -D"
This command produces the same error,
though I am logged in as root in my bash session as well.
After googling for a while I found a partial solution - setting the PATH env variable like this:
su - root -c "PATH=\"$(ruby -e 'print Gem.default_dir')/bin:$PATH\" && cd /app && bundle exec unicorn -c config/unicorn.rb -E uat -D"
But now I see Could not find rake-12.0.0 in any of the sources.
rake --version returns rake, version 12.0.0, while su - root -c "rake --version" returns rake, version 10.4.2.
which rake returns /usr/local/bundle/bin/rake, while su - root -c "which rake" returns /usr/local/bin/rake.
So my guess is that the service uses the wrong path for rake.
How do I change the default rake path? Or do you have any other suggestion where to dig?
---------------- UPDATE - kinda solution ---------------------
I think I found the reason for all my issues with bundler in Docker. It looks like all the env variables for bundler are set during shell startup, so they are not there if I run a command as sudo su - appuser -c "...cmd...".
I tested this by running printenv directly in bash, and then as sudo su - appuser -c "printenv" - and found a big difference.
Since I was building a Docker image, I set them through the Dockerfile, but it also works to just export them:
ENV PATH=/usr/local/bundle/bin:/usr/local/bundle/gems/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV RUBYGEMS_VERSION=3.0.3
ENV RUBY_VERSION=2.3.8
ENV GEM_HOME=/usr/local/bundle
ENV BUNDLE_PATH=/usr/local/bundle
ENV BUNDLE_SILENCE_ROOT_WARNING=1
ENV RUBY_MAJOR=2.3
ENV BUNDLE_APP_CONFIG=/usr/local/bundle
I also did:
RUN bundle config app_config /usr/local/bundle && bundle config path /usr/local/bundle
And since the right way is to not run a web app as root, I rebuilt everything in the Dockerfile so that it creates and uses a separate user (this part, I guess, is optional):
RUN adduser --disabled-password --gecos "" appuser
....
# install sudo
RUN apt-get update
RUN apt-get install -y sudo
....
# give sudo to the new user
RUN echo "appuser ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/appuser && chmod 0440 /etc/sudoers.d/appuser
....
# don't forget to give the new user rights to your folder, like this:
#RUN sudo chown -R appuser:appuser /usr/local
....
# use the new user
USER appuser
# bundle install and the rest of the steps go here
....
Hope my update saves someone time.
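The "big difference" between the two printenv runs can be computed rather than eyeballed. In this sketch, env -i stands in for the clean login shell su starts (su itself needs root), and BUNDLE_MARKER is an invented variable standing in for bundler's GEM_HOME/BUNDLE_PATH settings:

```shell
# Snapshot both environments sorted, then list what the clean
# login-like shell loses.
export BUNDLE_MARKER=/usr/local/bundle   # stand-in for bundler's vars
printenv | sort > /tmp/env.interactive
env -i HOME="$HOME" sh -c 'printenv' | sort > /tmp/env.clean
# Lines only in the interactive environment, i.e. everything you would
# have to re-export (or bake into the Dockerfile with ENV):
comm -23 /tmp/env.interactive /tmp/env.clean
```

Every variable this prints is a candidate for an `ENV` line in the Dockerfile, which is exactly what the list above does for the bundler variables.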

How can I change the group of a file when executing in a Travis CI build?

I've got a Python Travis CI build in which a unit test attempts to change the group of a file on the filesystem. The file was previously created by the unit test, so the user executing the test owns the file.
I'm able to start a sub-shell in which I can run chgrp commands (per the Travis guidelines), but unfortunately, this screws up the virtualenv set up for my specific Python version (and who knows what else).
How to replicate (in Travis CI script):
language: python
sudo: true
python:
- "3.4"
- "3.5"
before_install:
- sudo apt-get -qq update
- sudo gpasswd -a $USER fuse
script:
- touch testfile
- chgrp fuse testfile | echo 0 # this does not work - bad
- sudo -E su $USER -c "chgrp fuse testfile" # the sudo / su wrapper is required per Travis instructions, see link above - good
- python --version # reports 3.4 or 3.5 - good
- sudo -E su $USER -c "python --version" # always reports a system python (2.7 or 3.2) - bad
As I've commented in the block above, running a command which attempts to change the group of the testfile (which is what my unit test code is doing) only works when wrapped with sudo -E su $USER -c.
Unfortunately, when I do this, I lose the ability to access python 3.4 and 3.5 in those script phases (which I've specified above) in the virtualenv that Travis has set up for me.
Any idea how I can achieve both of my goals? (Allowing chgrp from the Travis non-root user while simultaneously not mucking with the virtualenv or the python on the path.)
When you create a new group, you have to log out and log in again to be able to use chgrp.
Using sudo is a way around this behavior. Since you're already using it for groupadd and usermod, I suggest changing the last line to sudo chgrp newtravisgroup newfile.
You can also use su to create a new login shell where newtravisgroup will be available but using sudo as mentioned above is the simplest way.
Edit:
When you use su, PATH is reset. That's the reason python reverts to the system python. You can activate the virtualenv again before running your test:
sudo -E su $USER -c "source $VIRTUAL_ENV/bin/activate; python --version"
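Why re-sourcing activate works: at its core, the activate script puts the virtualenv's bin directory back at the front of PATH, so the fix survives su's reset. A toy reconstruction (the /tmp/fakevenv layout is invented, and a real virtualenv's activate does more - prompt, deactivate function, etc.):

```shell
# Build a stand-in "virtualenv" whose activate only adjusts PATH.
mkdir -p /tmp/fakevenv/bin
cat > /tmp/fakevenv/bin/activate <<'EOF'
export VIRTUAL_ENV=/tmp/fakevenv
export PATH="$VIRTUAL_ENV/bin:$PATH"
EOF
# Even in a scrubbed environment (standing in for su's fresh login
# shell), sourcing activate restores the venv to the front of PATH:
env -i PATH=/usr/bin:/bin sh -c '. /tmp/fakevenv/bin/activate; echo "$PATH"'
```

With the venv's bin first in PATH again, `python` resolves to the venv's interpreter rather than the system one.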

How to set PS1 in Docker Container

I want to set the $PS1 environment variable in the container. It helps me identify a multilevel or complex Docker environment setup. Currently the container prompts with:
root@container-id#
If I can change it to the following, I can identify the container by looking at the $PS1 prompt itself:
[Level-1]root@container-id#
I experimented with exporting $PS1 by making my own image (Dockerfile), a .profile file, etc., but it's not reflected.
I had the same problem but in docker-compose context.
Here is how I managed to make it work:
# docker-compose.yml
version: '3'
services:
my_service:
image: my/image
environment:
- "PS1=$$(whoami):$$(pwd) $$ "
Just pass PS1 value as an environment variable in docker-compose.yml configuration file.
Notice how dollars signs need to be escaped to prevent docker-compose from interpolating values (documentation).
This Dockerfile sets PS1 by doing:
RUN echo 'export PS1="[\u@docker] \W # "' >> /root/.bash_profile
We use a similar technique for tracking inputs and outputs in complex container builds.
https://github.com/ianmiell/shutit/blob/master/shutit_global.py#L1338
This line represents the product of hard-won experience dealing with docker/(p)expect combinations:
"SHUTIT_BACKUP_PS1_%s=$PS1 && PS1='%s' && unset PROMPT_COMMAND"
Backing up the prompt is handy if you want to revert; PS1= sets the prompt; and unsetting PROMPT_COMMAND removes any nasty surprises with the terminal being reset etc. for the expect session.
If the question is about how to ensure it's set when you run the container (as opposed to building it), then you may need to add something to your .bashrc / .profile files, depending on how you start the container. As far as I know there's no way to ensure it with a Dockerfile directive and make it persist.
I normally create /home/USER/.bashrc or /root/.bashrc, depending on who the USER of the Dockerfile is. That works well. I've tried
ENV PS1 '# '
but that never worked for me.
Here's a way to set the PS1 when you run the container:
docker run -it \
python:latest \
bash -c "echo \"export PS1='[python:latest] \w$ '\" >> ~/.bashrc && bash"
I made a little wrapper script, to be able to run any image with my custom prompt:
#!/usr/bin/env bash
# ~/bin/docker-run
set -eu
image=$1
docker run -it \
-v $(pwd):/opt/app \
-w /opt/app ${image} \
bash -c "echo \"export PS1='[${image}] \w$ '\" >> ~/.bashrc && bash"
In Debian 9, running bash, this worked:
RUN echo 'export PS1="[\$ENV_VAR] \W # "' >> /root/.bashrc
The container generally runs as root, and I generally know I'm in Docker, so I wanted a prompt that indicated what the container was; I used an environment variable for that. And I guess the bash I use loads .bashrc preferentially.
Try setting environment variables using docker options
Example:
docker run \
-ti \
--rm \
--name ansibleserver-debug \
-w /githome/axel-ansible/ \
-v /home/lordjea/githome/:/githome/ \
-e "PS1=DEBUG$(pwd)# " \
lordjea/priv:311 bash
docker --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
Options:
...
-e, --env list Set environment variables
...
You should set that in .profile, not .bashrc.
Just open .profile from your root or home directory and replace PS1='\u@\h:\w\$ ' with PS1='\e[33;1m\u@\h: \e[31m\W\e[0m\$ ' or whatever you want.
Note that you need to restart your container.
On my Mac I have an alias named lxsh that will start a bash shell using the ubuntu image in my current directory (details). To make the shell's prompt change, I mounted a host file onto /root/.bash_aliases. It's a dirty hack, but it works. The full alias:
alias lxsh='echo "export PS1=\"lxsh[\[$(tput bold)\]\t\[$(tput sgr0)\]\w]\\$\[$(tput sgr0)\] \"" > $TMPDIR/a5ad217e-0f2b-471e-a9f0-a49c4ae73668 && docker run --rm --name lxsh -v $TMPDIR/a5ad217e-0f2b-471e-a9f0-a49c4ae73668:/root/.bash_aliases -v $PWD:$PWD -w $PWD -it ubuntu'
The below solution assumes that you've used Dockerfile USER to set a non-root Linux user for Bash.
What you might have tried without success:
ENV PS1='[docker]$' ## may not work
Using ENV to set PS1 can fail because the value can be overridden by default settings in a preexisting .bashrc when an interactive shell is started. Some Linux distributions are opinionated about PS1 and set it in an initial .bashrc for each user (Ubuntu does this, for example).
The fix is to modify the Dockerfile to set the desired value at the end of the user's .bashrc -- overriding any earlier settings in the script.
FROM ubuntu:20.04
# ...
USER myuser ## the username
RUN echo "PS1='\n[ \u@docker \w ]\n$ '" >> .bashrc
