`nsenter` + specifying a user needs environment variable assignment

I'm running a command in a network namespace using nsenter, and I wish to run it as an ordinary (non-root) user because I want to access an Android SDK installation, which exists in my own home directory.
I find that although I can specify which user I want in my nsenter command, my environment variables don't get set accordingly, and I don't see a way to set those variables. What can I do?
sudo nsenter --net=/var/run/netns/netns1 -S 1000 bash -c -l whoami
# => bash: /root/.bashrc: Permission denied
# => myuser
sudo nsenter --net=/var/run/netns/netns1 -S 1000 bash -c 'echo $HOME'
# => /root
Observe that:
When I attempt a login shell (with -l), bash attempts to source /root/.bashrc instead of /home/myuser/.bashrc
$HOME is /root
If I prepend my command with a variable assignment (HOME=/home/markham sudo nsenter --net=/var/run/netns/netns1 -S 1000 bash -c -l whoami), I get the same results.
(I'm using nsenter from util-linux 2.34.)
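One possible workaround (a sketch, untested here; it assumes the same netns path and uid 1000 as above): since nsenter -S only switches the uid and keeps root's environment, set the variables yourself before bash starts, or enter only the netns as root and let sudo -u rebuild the target user's environment inside the namespace.
# Set HOME/USER explicitly with env(1) before the login shell starts:
sudo nsenter --net=/var/run/netns/netns1 -S 1000 \
    env HOME=/home/myuser USER=myuser LOGNAME=myuser bash -l -c 'whoami; echo $HOME'
# Or keep nsenter as root and let sudo -H set up myuser's environment:
sudo nsenter --net=/var/run/netns/netns1 sudo -u myuser -H bash -l -c 'whoami; echo $HOME'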

Related

Docker - Supervisord container with Nginx (sudo user)

I have created a base image with supervisord installed.
Summary of steps:
FROM ubuntu:20.04
Then I installed some base utilities (time zone/nano/sudo/zip etc)
FROM current_timezone/base-utils:1.04
Then I created a base supervisord image including a user with sudo privileges and password.
RUN apt-get update \
&& groupadd ${DOCKER_CONTAINER_WEBGROUP} -f \
&& useradd -m -s $(which bash) -G sudo ${DOCKER_CONTAINER_USERNAME} \
&& echo "${DOCKER_CONTAINER_USERNAME}:${DOCKER_CONTAINER_PASSWORD}" | chpasswd \
&& usermod -aG www-data ${DOCKER_CONTAINER_USERNAME}
So in any Docker image deriving from this I can run supervisord:
USER ${DOCKER_CONTAINER_USERNAME}
CMD ["/usr/bin/supervisord"]
So, I have Dockerfile entries for my images deriving from this image:
Apache
Nginx
Varnish
etc
Most of the applications can launch with supervisord like this:
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
autorestart=false
startretries=0
But Nginx doesn't launch; the error is:
the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
So I created the following, thinking I would get an input prompt once the container starts (the objective was to receive the prompt at container start so the password could be passed to sudo -S to start Nginx):
[program:nginx]
command=sudo -K && read -s -p "Nginx requires a super-privileges (sudo user) to start - Please enter password for your sudo user: " TMP_PW && echo $TMP_PW | sudo -S service nginx start && unset TMP_PW
user=userdefinedinstagesupwards
Running the command above on the command line once I am already inside the container (docker exec -ti container_nginx bash) works, and I can enter the password there.
The Issues
Nginx does not start automatically, and I have to enter the container to start it manually.
NOTE: I have seen the Docker nginx image (docker run -d -v $PWD/nginx.conf:/etc/nginx/nginx.conf nginx), but it only contains Nginx. I have some tools I would like to reuse (as explained above, I created an image that has those installed), which means I would have to recreate those steps just for Nginx.
Additional information
As requested by users below: the reason I am using supervisord like this is that I run multiple helper scripts (debug info/dynamic paths/secrets) alongside the main application (e.g. Apache/Nginx/Varnish).
A brief example: an Apache web server with two supervisord config files:
When supervisord initializes (CMD ["/usr/bin/supervisord"]), the main application starts along with the helper scripts (in this example, ones that echo environment variables built up in the parent images). I can then access all output in /var/log/supervisor/app-stdout* (or stderr) as required.
For instance, I then have ${INSTALLED_BASE_APPS_TEXT} available, which tells me which of my base-utils apps are installed. If I ever need to add another tool, say htop, I can update the parent image and rebuild this child stage later. Some tools I always want available regardless of which container is running; nano, zip, etc. are things I use permanently.
supervisor/conf.d/config-webserver.conf
[supervisord]
nodaemon=true
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
autorestart=false
startretries=0
supervisor/conf.d/config-information.conf
[program:echo]
command=/bin/bash -c "echo Loaded Supervisord program 'echo' - Stage 5 operation \(Custom Nginx supervisord config\)"
autorestart=false
startretries=1
[program:echo_base_utils]
command=/bin/bash -c "echo ${INSTALLED_BASE_APPS_TEXT}"
autorestart=false
startretries=0
[program:echo_test_item]
command=/bin/bash -c "echo ${ENV_TEST_ITEM}"
autorestart=false
startretries=0
QUESTION
Is there any way that supervisord commands can be made to prompt for input as soon as the container starts? I would like to keep using the images described above.
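For what it's worth, a commonly suggested alternative is to sidestep the password prompt entirely: run supervisord itself as root (omit the USER directive for this stage), let the nginx master start as root so its workers can drop to www-data via nginx.conf, and drop privileges per program with user= only where a program doesn't need root. A rough sketch, not tested against the images above:
# supervisor/conf.d/config-nginx.conf (sketch; assumes supervisord runs as root)
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autorestart=false
startretries=0
[program:echo_unprivileged]
command=/bin/bash -c "echo running as a normal user"
user=userdefinedinstagesupwards
autorestart=false
startretries=0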

Docker: executing all commands as local user and not root

How do I run docker run and docker-compose up/run commands so that the process inside the container is run by a user with the same uid as my local user?
I need to do this so that any files generated by an "inside-docker" process are owned by my local user on my own file system.
To replicate:
Use the alpine:3.9 container, mount in a volume for the file to be written and create the file. Assume my current username is user.
mkdir output_dir #Create an output directory
docker run -it --rm --volume "/path/to/output_dir:/tmp" alpine:3.9 touch /tmp/file.txt
ls -la output_dir/file.txt
Will give the output:
-rw-r--r-- 1 root root 0 Feb 7 19:51 /path/to/output_dir/file.txt
This means I need to sudo chown user:user /path/to/output_dir/file.txt to have access as my current user on my own file system.
How do I do this without this extra step?
Idea that comes to mind:
Add a Docker entrypoint which will create a user inside the container with the same uid as my local user and execute any code as that user.
docker-entrypoint.sh
#!/bin/sh
TEMP_UID="${TEMP_UID:-1000}"
set -ux
useradd -s /bin/false --no-create-home -u ${TEMP_UID} temp
#su-exec is an executable which makes it easy to run a process as a specific user.
exec su-exec temp "$@"
The problem with this is that I will have to inject TEMP_UID=<user_id> as an environment variable at every docker run command, or include it in my docker-compose.yml file for every docker-compose up/run command. If Docker has an internal variable that keeps track of the uid of the user that ran it, I would just use that. But I can't seem to find such an internal variable.
Any help would be greatly appreciated!
I think the answer is as simple as
docker run --user ${UID} -it --rm --volume "/path/to/output_dir:/tmp" alpine:3.9 touch /tmp/file.txt
Note I injected --user ${UID} into your example command.
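For docker-compose, a comparable sketch is to pass the IDs through variable substitution (LOCAL_UID/LOCAL_GID and the service name app are names I made up here; compose only substitutes variables that are set in its environment or an .env file):
# docker-compose.yml (sketch)
version: "3"
services:
  app:
    image: alpine:3.9
    user: "${LOCAL_UID}:${LOCAL_GID}"
    volumes:
      - ./output_dir:/tmp
    command: touch /tmp/file.txt
# invoked as, for example:
#   LOCAL_UID=$(id -u) LOCAL_GID=$(id -g) docker-compose up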
Many of the current options require a change outside of the container to pass in the current user, or rely on variables that may not exist in all environments. My preferred solution, since the goal is to fix file permissions on mounted volumes, is to start the entrypoint as root with a script that changes the container's userid to match the volume mount's userid. The end of the entrypoint then launches the application with an exec gosu $app_user_name "$@" to switch from root to the application user that was modified inside the container.
Scripts to do this are in my base image repo. Take note of the fix-perms script which includes two sections like the following (one for uid and another for gid):
# update the uid
if [ -n "$opt_u" ]; then
  OLD_UID=$(getent passwd "${opt_u}" | cut -f3 -d:)
  NEW_UID=$(stat -c "%u" "$1")
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi
The OLD_UID value is from the userid in the image, and NEW_UID is from the volume mount. When those don't match, the usermod command is run, followed by a recursive chown command to fix any files with the old uid/gid.
Note that in production, where user IDs on the host can be standardized, I match the host user ID to that of the image if a volume is needed, allowing me to run the entrypoint as that user instead of root. The entrypoint checks the current userid and skips the fix-perms script and gosu command if it is not root.
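A condensed sketch of the entrypoint pattern described above (the fix-perms flags, the app_user name, and the /opt/app path are assumptions inferred from the snippet and prose; check the linked repo for the real script):
#!/bin/sh
# entrypoint.sh (sketch): start as root, align the image's uid with the
# volume's uid, then drop privileges before launching the application.
set -e
app_user=app_user      # assumed username baked into the image
app_dir=/opt/app       # assumed volume mount point
if [ "$(id -u)" = "0" ]; then
  fix-perms -r -u "$app_user" "$app_dir"   # adjust the uid and chown files with the old uid
  exec gosu "$app_user" "$@"               # switch from root to the app user
fi
exec "$@"                                  # already non-root: just run the command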

How to get /etc/profile to run automatically in Alpine / Docker

How can I get /etc/profile to run automatically when starting an Alpine Docker container interactively? I have added some aliases to an aliases.sh file and placed it in /etc/profile.d, but when I start the container using docker run -it [my_container] sh, my aliases aren't active. I have to manually type . /etc/profile from the command line each time.
Is there some other configuration necessary to get /etc/profile to run at login? I've also had problems with using a ~/.profile file. Any insight is appreciated!
EDIT:
Based on VonC's answer, I pulled and ran his example ruby container. Here is what I got:
$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42
/ # more /etc/profile.d/rubygems.sh
export PATH=$PATH:/usr/lib/ruby/gems/2.0.0/bin
/ # env
no_proxy=*.local, 169.254/16
HOSTNAME=6c7e93ebc5a1
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
Although the /etc/profile.d/rubygems.sh file exists, it is not being run when I log in, and my PATH environment variable is not being updated. Am I using the wrong docker run command? Is something else missing? Has anyone gotten ~/.profile or /etc/profile.d/ files to work with Alpine on Docker? Thanks!
The default shell in Alpine Linux is ash.
Ash will only read the /etc/profile and ~/.profile files if it is started as a login shell (sh -l).
To force ash to source /etc/profile, or any other script you want, upon its invocation as a non-login shell, you need to set up an environment variable called ENV before launching ash.
e.g. in your Dockerfile
FROM alpine:3.5
ENV ENV="/root/.ashrc"
RUN echo "echo 'Hello, world!'" > "$ENV"
When you build that you get:
deployer@ubuntu-1604-amd64:~/blah$ docker build --tag test .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine:3.5
3.5: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
Status: Downloaded newer image for alpine:3.5
---> 4a415e366388
Step 2/3 : ENV ENV "/root/.ashrc"
---> Running in a9b6ff7303c2
---> 8d4af0b7839d
Removing intermediate container a9b6ff7303c2
Step 3/3 : RUN echo "echo 'Hello, world!'" > "$ENV"
---> Running in 57c2fd3353f3
---> 2cee6e034546
Removing intermediate container 57c2fd3353f3
Successfully built 2cee6e034546
Finally, when you run the newly generated container, you get:
deployer@ubuntu-1604-amd64:~/blah$ docker run -ti test /bin/sh
Hello, world!
/ # exit
Notice the Ash shell didn't run as a login shell.
So to answer your query, replace
ENV ENV="/root/.ashrc"
with:
ENV ENV="/etc/profile"
and Alpine Linux's Ash shell will automatically source the /etc/profile script each time the shell is launched.
Gotcha: /etc/profile is normally meant to be sourced only once! So I would advise that you don't source it, and instead source a /root/.somercfile (a minimal sketch of this follows below).
Source: https://stackoverflow.com/a/40538356
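A minimal sketch of that advice, assuming the container runs as root and reusing the OP's aliases.sh (the .ashrc filename is arbitrary):
FROM alpine:3.5
# Point ash's ENV at a dedicated rc file instead of /etc/profile
ENV ENV="/root/.ashrc"
COPY aliases.sh /root/.ashrc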
You still can try in your Dockerfile a:
RUN echo '\
. /etc/profile ; \
' >> /root/.profile
(assuming the current user is root. If not, replace /root with the full home path)
That being said, those /etc/profile.d/xx.sh should run.
See codeclimate/docker-alpine-ruby as an example:
COPY files /
With files/etc including a files/etc/profile.d/rubygems.sh that runs just fine.
In the OP project Dockerfile, there is a
COPY aliases.sh /etc/profile.d/
But the default shell is not a login shell (sh -l), which means profile files (or those in /etc/profile.d) are not sourced.
Adding sh -l would work:
docker@default:~$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42 sh -l
87a58e26b744:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/ruby/gems/2.0.0/bin
As mentioned by Jinesh before, the default shell in Alpine Linux is ash:
localhost:~$ echo $SHELL
/bin/ash
localhost:~$
Therefore the simple solution is to add your aliases to .profile. In this case, I put all my aliases in ~/.ash_aliases:
localhost:~$ cat .profile
# ~/.profile
# Alias
if [ -f ~/.ash_aliases ]; then
. ~/.ash_aliases
fi
localhost:~$
.ash_aliases file
localhost:~$ cat .ash_aliases
alias a=alias
alias c=clear
alias f=file
alias g=grep
alias l='ls -lh'
localhost:~$
And it works :)
I use this:
docker exec -it my_container /bin/ash '-l'
The -l flag passed to ash will make it behave as a login shell, thus reading ~/.profile.

su - $USER -p -c "$CMD" not accessing path

I am a former Windows guy and I am having trouble with the Unix shell.
su - $USER -p -c "$CMD": a command like this one should have access to the PATH of the given environment, but it does not. When I change it to su - $USER -p -c "export PATH=$PATH; $CMD", it works as expected (I guess).
I am trying this code in an init script, and I have another question here related to this one. (Sorry for the duplication, but I am not sure where the correct place to ask is.)
My first question is: why does su - $USER -c $CMD forget all previously defined env variables?
Is it a correct approach to insert the PATH inside the command, as in su - $USER -p -c "export PATH=$PATH; $CMD"?
Edit
I tried removing the -: su $USER -p -c "whoami && echo $PATH && $CMD". Still not working.
When I experiment with the command su - $USER -p -c "whoami && echo $PATH && $CMD", I can see that $USER and $PATH are set correctly, but it still cannot find binaries under $PATH.
Edit-2
I made a few more experiments and have come to the shortest working form: su $USER -c "PATH=$PATH; $CMD". I am still not sure if this is the best practice.
su - means switch user and load the new user's environment (similar to what's loaded when you log in as that user to begin with). Try doing su instead, without the -. This switches the user but keeps the environment as it was before you switched.
Well, su - means use a login shell, so it takes on the environment of the user you are su'ing to. If you want to keep your env, omit the -.
man su:
The value of $PATH is reset to /bin:/usr/bin for normal users...
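To summarize the two behaviours in one place (a sketch; $USER and $CMD are whatever the init script defines):
# Login shell: the target user's environment is rebuilt, so PATH is reset
su - "$USER" -c "$CMD"
# Non-login shell with the caller's PATH injected into the command
# (the shortest working form the OP arrived at):
su "$USER" -c "PATH=$PATH; $CMD"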

How to set PS1 in Docker Container

I want to set the $PS1 environment variable in the container. It helps me identify a multilevel or complex Docker environment setup. Currently the docker container prompt is:
root@container-id#
If I can change it as follows, I can identify the container by looking at the $PS1 prompt itself:
[Level-1]root@container-id#
I experimented with exporting $PS1 by making my own image (Dockerfile), adding a .profile file, etc. But it's not reflected.
I had the same problem, but in a docker-compose context.
Here is how I managed to make it work:
# docker-compose.yml
version: '3'
services:
  my_service:
    image: my/image
    environment:
      - "PS1=$$(whoami):$$(pwd) $$ "
Just pass the PS1 value as an environment variable in the docker-compose.yml configuration file.
Notice how dollar signs need to be escaped to prevent docker-compose from interpolating the values (documentation).
This Dockerfile sets PS1 by doing:
RUN echo 'export PS1="[\u@docker] \W # "' >> /root/.bash_profile
We use a similar technique for tracking inputs and outputs in complex container builds.
https://github.com/ianmiell/shutit/blob/master/shutit_global.py#L1338
This line represents the product of hard-won experience dealing with docker/(p)expect combinations:
"SHUTIT_BACKUP_PS1_%s=$PS1 && PS1='%s' && unset PROMPT_COMMAND"
Backing up the prompt is handy if you want to revert; setting PS1='%s' sets the prompt; and unsetting PROMPT_COMMAND removes any nasty surprises with the terminal being reset, etc., for the expect session.
If the question is about how to ensure it's set when you run the container up (as opposed to building it), then you may need to add something to your .bashrc / .profile files, depending on how you start your container. As far as I know there's no way to ensure it with a Dockerfile directive and make it persist.
I normally create /home/USER/.bashrc or /root/.bashrc, depending on who the USER of the Dockerfile is. That works well. I've tried
ENV PS1 '# '
but that never worked for me.
Here's a way to set the PS1 when you run the container:
docker run -it \
python:latest \
bash -c "echo \"export PS1='[python:latest] \w$ '\" >> ~/.bashrc && bash"
I made a little wrapper script, to be able to run any image with my custom prompt:
#!/usr/bin/env bash
# ~/bin/docker-run
set -eu
image=$1
docker run -it \
  -v $(pwd):/opt/app \
  -w /opt/app ${image} \
  bash -c "echo \"export PS1='[${image}] \w$ '\" >> ~/.bashrc && bash"
In Debian 9, for running bash, this worked:
RUN echo 'export PS1="[\$ENV_VAR] \W # "' >> /root/.bashrc
It's generally running as root, and I generally know I am in Docker, so I wanted a prompt that indicated what the container was; hence the environment variable. And I guess the bash I use loads .bashrc preferentially.
Try setting environment variables using docker options
Example:
docker run \
-ti \
--rm \
--name ansibleserver-debug \
-w /githome/axel-ansible/ \
-v /home/lordjea/githome/:/githome/ \
-e "PS1=DEBUG$(pwd)# " \
lordjea/priv:311 bash
docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
Options:
...
-e, --env list Set environment variables
...
You should set that in .profile, not .bashrc.
Just open .profile from your root or home and replace PS1='\u@\h:\w\$ ' with PS1='\e[33;1m\u@\h: \e[31m\W\e[0m\$ ' or whatever you want.
Note that you need to restart your container.
On my Mac I have an alias named lxsh that will start a bash shell using the ubuntu image in my current directory (details). To make the shell's prompt change, I mounted a host file onto /root/.bash_aliases. It's a dirty hack, but it works. The full alias:
alias lxsh='echo "export PS1=\"lxsh[\[$(tput bold)\]\t\[$(tput sgr0)\]\w]\\$\[$(tput sgr0)\] \"" > $TMPDIR/a5ad217e-0f2b-471e-a9f0-a49c4ae73668 && docker run --rm --name lxsh -v $TMPDIR/a5ad217e-0f2b-471e-a9f0-a49c4ae73668:/root/.bash_aliases -v $PWD:$PWD -w $PWD -it ubuntu'
The below solution assumes that you've used Dockerfile USER to set a non-root Linux user for Bash.
What you might have tried without success:
ENV PS1='[docker]$' ## may not work
Using ENV to set PS1 can fail because the value can be overridden by default settings in a preexisting .bashrc when an interactive shell is started. Some Linux distributions are opinionated about PS1 and set it in an initial .bashrc for each user (Ubuntu does this, for example).
The fix is to modify the Dockerfile to set the desired value at the end of the user's .bashrc -- overriding any earlier settings in the script.
FROM ubuntu:20.04
# ...
USER myuser ## the username
RUN echo "PS1='\n[ \u@docker \w ]\n$ '" >> .bashrc

Resources