I have an Alpine Docker container, and depending on how I connect using ssh the PATH is different. If I connect using a login shell:
ssh root@localhost sh -lc env | grep PATH
this prints:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
However, if I don't use a login shell:
ssh root@localhost sh -c env | grep PATH
this prints:
PATH=/bin:/usr/bin:/sbin:/usr/sbin
Why is this happening? What do I need to do so that the second command produces the same output as the first command?
With sh -l you start a login shell:
When invoked as an interactive login shell, or a non-interactive shell with the --login option, it first attempts to read and execute commands from /etc/profile and ~/.profile, in that order. The --noprofile option may be used to inhibit this behavior.
...
A non-interactive shell invoked with the name sh does not attempt to read any other startup files.
From https://linux.die.net/man/1/sh
That is, you can probably edit the profile files to make the login shell behave as if --noprofile were given, but going the other way around would be difficult.
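For instance, a minimal sketch: appending a line like this to /etc/profile or ~/.profile (using the non-login default PATH from the question; pick whatever value you need) would make the login shell's PATH match the plain shell's:
export PATH=/bin:/usr/bin:/sbin:/usr/sbin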
I'll answer my own question. This Stack Overflow post has the main info needed: Where to set system default environment variables in Alpine linux?
Given that, there are two alternatives:
Declare PATH using the ENV option of the Dockerfile
Or add PermitUserEnvironment yes to sshd_config file and define PATH in ~/.ssh/environment
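For example (a sketch; the PATH value here is the login-shell default from above):
# Alternative 1: bake PATH into the image in the Dockerfile
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Alternative 2: let sshd read per-user environment files
# in /etc/ssh/sshd_config:
PermitUserEnvironment yes
# in ~/.ssh/environment:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin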
I'm creating a Dockerfile that needs to execute a command, let's call it foo
In order to execute foo, I need to create a .cfg file in the current directory with token information to call this foo service.
So basically I should do something like
ENV FOO_TOKEN token
ENV FOO_HOST host
ENV FOO_SHARED_DIRECTORY directory
ENV LIBS_TARGET target
and then put the first three variables in a .cfg file and then launch a command using the last variable as target.
Given that, if you specify more than one CMD in a Dockerfile, only the last one takes effect, how should I do that?
My ideal execution is docker run -e "FOO_TOKEN=aaaaaaa" -e "FOO_HOST=myhost" -e "FOO_SHARED_DIRECTORY=Shared" -e "LIBS_TARGET=target/scala-2.11/*.jar" -it --rm --name my-ci-deploy foo/foo:latest
If you wanted to keep everything in the Dockerfile (something I think is rather desirable), you can do something nasty like:
ENV SCRIPT=IyEvdXNyL2Jpbi9lbnYgYmFzaApwZG9fc3Fsc3J2PTAKc3Vkbz0KdmVuZG9yPSQoIGxzYl9yZWxlYXNlIC1p
RUN echo -n "$SCRIPT" | base64 -d | /usr/bin/env bash
Where the contents of SCRIPT= are derived by piping your shell script thusly:
cat my_script.sh | base64 --wrap=0
You may have to adjust the /usr/bin/env bash if you have a really minimal (Alpine) setup.
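For this question specifically, the script you encode might look like the following sketch (the .cfg key names and the way foo consumes the target are assumptions; adapt them to whatever foo actually expects):
#!/usr/bin/env bash
# write the first three variables into the config file foo reads
cat > .cfg <<EOF
token=$FOO_TOKEN
host=$FOO_HOST
shared_directory=$FOO_SHARED_DIRECTORY
EOF
# then launch foo against the requested target
exec foo "$LIBS_TARGET"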
I have a simple shell script that executes a docker exec command inside a container.
The script is located in /var/www/mysite-nginx/nginx-reload.sh and permissions of this file are -rwxrwxr-x
#!/bin/sh
docker exec -it mysite_nginx nginx -s reload
If I execute this script directly from a shell, it works. But if I add the script to my crontab with the following line, it doesn't work.
15 4 * * * /var/www/mysite-nginx/nginx-reload.sh
I suppose that cron isn't executing the command correctly. What is wrong?
In /var/log/syslog I have:
Jul 23 15:30:01 arrubiu CRON[29511]: (sergej) CMD (/var/www/mysite-nginx/nginx-reload.sh)
[EDIT] Solved in this way: docker exec is not working in cron
The issue seems to be that docker is not found. There are two ways around:
You enter the full paths of all applications in your crontab script; you can find them using e.g. locate docker, so that it looks something like
#!/bin/sh
# full path for docker; nginx resolves inside the container's own PATH
# (also drop -it: cron provides no TTY)
/usr/bin/docker exec mysite_nginx nginx -s reload
Alternatively, you can set $PATH and other environment variables the same way they are set for a normal sh script. To achieve that, first back up what is saved in /etc/environment, and then append the currently available variables by executing:
cp /etc/environment ~/my_etc_environment_backup
env >> /etc/environment
Related questions on SO
Where can I set environment variables that crontab will use?
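From that linked question, the simplest variant is often to set PATH at the top of the crontab itself, for example:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
15 4 * * * /var/www/mysite-nginx/nginx-reload.sh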
I want to run a command, eval $(envkey-source), to set certain environment variables using envkey. I install it, set my ENVKEY variable, and then try to import all the environment variables. I do all of this via Docker. However, Docker gives an error on this command:
Step 31/35 : RUN eval $(envkey-source)
---> Running in 6a9ebf1ede96
/bin/sh: 1: export: : bad variable name
The command '/bin/sh -c eval $(envkey-source)' returned a non-zero code: 2
I tried reading the envkey documentation, but it says nothing about Docker.
I have installed envkey using following commands:
ENV ENVKEY=yada_yada
RUN curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash
Up to here, all goes well: the console prints verbose suggestions about how to run envkey-source to get all the environment variables set.
The problem comes at this step:
RUN eval $(envkey-source)
This fails with the same bad variable name error shown above.
You can't do this, for a couple of reasons. The envkey documentation eventually links to an example in their GitHub which you might find informative.
Each Dockerfile RUN command runs a new shell in a new container. In particular, environment variables set within a RUN command are lost after it exits. Any form of RUN export ... is a no-op. If variables are static you can set them using the ENV directive, but in this case where you're running a program that needs to generate them dynamically, you need another approach.
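To illustrate the difference (a minimal sketch):
# RUN export is a no-op: FOO is gone as soon as this step's shell exits
RUN export FOO=bar
# ENV persists into later build steps and into the running container
ENV FOO=bar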
A typical pattern here is to use a shell script as your container's ENTRYPOINT. That does some initial setup and then replaces itself with the container's CMD. Since the CMD runs in the same shell environment as the rest of the script, you can do dynamic variable setup here. The script might look like:
#!/bin/sh
eval "$(envkey-source)"
exec "$#"
The other thing to keep in mind here is that anyone can docker inspect your image and get its environment variables back out, or docker run imagename /usr/bin/env. If you could run envkey-source in the Dockerfile then the environment variables would be available in the image in clear text, which defeats the purpose. Even embedding the key in the image effectively leaks it. You should pass this at runtime using a docker run -e option or a Docker Compose environment: key, relaying it from the host's environment.
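For example, relaying the key from the host environment at run time (imagename is a placeholder):
docker run -e ENVKEY="$ENVKEY" imagename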
If I want to run, for example, wget in a Dockerfile, I can type this:
RUN wget http://example.com
If I want to do an echo command, I could do this:
RUN echo 'Hello' >> /home/file.text
But I've also seen this:
RUN bash -c 'echo $USERNAME:ros | chpasswd'
If I want to run a shell script, I could do this
RUN 'bash ./install_foo.sh'
I also was recommended this:
RUN . /home/ros/.bashrc
I think there are some invalid examples above and others with subtly different semantics. I would like to:
Understand them so I can learn
Know which is the right one to use when I want to run a shell script
Here's a brain dump of related one-line answers:
Every RUN command launches a new shell (in a new container even) with a new clean environment and doesn't read any dotfiles. RUN export ... and RUN . ... are both no-ops that will have no effect on later steps.
Many standard Docker paths (like docker run ... some command) don't involve a shell at all, so if you create a .bashrc or .profile file it will be ignored in many common cases.
Unquoted RUN some command, CMD some command, and ENTRYPOINT some command are all automatically wrapped in sh -c '...' and you basically never need to say this explicitly. (In the case of ENTRYPOINT using the unquoted form is probably a bug.) Forms like CMD ["some", "command"] do not implicitly involve a shell (and don't expand environment variables).
GNU bash has several vendor extensions that are unfortunately in widespread use; Alpine base images don't include bash. In particular, never say source when . is in the standard and does the same thing.
If you're installing software in an image, your best choice is to install it in a "system" location (pip install without an active virtual environment, npm install -g, ./configure --prefix=/usr/local); if you must install it somewhere else, use the Dockerfile ENV directive to set any environment variables that are needed; and if you can't do that, an ENTRYPOINT wrapper script can programmatically set the environment for the main process (but not any docker exec shells).
Just in general, ./foo.sh will run a shell script (provided it is executable and starts with a #!/bin/sh line); bash foo.sh will as well (but doesn't require it to be executable and explicitly specifies which shell to use); and . ./foo.sh runs it in the context of the current shell (only this form can change environment variables for example).
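Putting that last point into Dockerfile terms, a sketch of the patterns that work (install_foo.sh is the script from the question; /tmp is an arbitrary location):
COPY install_foo.sh /tmp/install_foo.sh
# rely on the shebang line and the execute bit:
RUN chmod +x /tmp/install_foo.sh && /tmp/install_foo.sh
# or name the interpreter explicitly (sh here, since Alpine images lack bash):
RUN sh /tmp/install_foo.sh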
I'm trying to write a little Dockerfile that sets a USER and just echoes the current user, as a small example to prove to myself that it is working. I've tried a number of variants and couldn't find much help in the documentation.
FROM ubuntu
USER daemon
# ENTRYPOINT ["echo", "$USER"]
# just gives "$USER"
# ENTRYPOINT ["echo", "-e", "${USER}"]
# just gives "$USER"
# ENTRYPOINT echo $USER
# gives empty string
# ENTRYPOINT ["/bin/echo", "$USER"]
# just gives "$USER"
I'm running docker build . on the Dockerfile and then running docker run <image-id> and getting the results shown in the comments above.
The expected result is daemon, or without the USER daemon line I would expect root. It's probably a really simple answer.
This is the expected behavior, as weird as it seems!
When ENTRYPOINT is a list (as in ENTRYPOINT ["echo", "$USER"]), it is used as-is, without further parsing or interpretation. So $USER remains $USER, because there is no shell involved in the process to replace it with the value of the USER environment variable.
Now, when ENTRYPOINT is a string (as in ENTRYPOINT echo $USER), what is actually executed is sh -c "echo $USER", and $USER is replaced with the value of the environment variable (as you would expect).
However, the environment variable USER is not set by default. It is set by the login process; and when you just run sh -c ... the login process is not involved.
Compare the environments when running docker run -t -i ubuntu bash and docker run -t -i ubuntu login -f root. In the former case, you will get a very basic environment; in the latter case, you will get the complete environment that you are used to (including the USER variable).
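A minimal sketch of the difference (GREETING is a made-up variable; Docker does expand variables set with ENV in the shell form):
FROM ubuntu
USER daemon
ENV GREETING=hello
# exec form: no shell, so $GREETING would be passed literally
# ENTRYPOINT ["echo", "$GREETING"]
# shell form: runs under sh -c, so this prints "hello from daemon"
ENTRYPOINT echo "$GREETING from $(whoami)"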
Couldn't you set, in the Dockerfile, an ENV variable to a default value, and then, when running a container, use the -e/--env option to override what would be interpreted by the:
ENTRYPOINT echo $SOMEENVVAR
form of ENTRYPOINT?
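Something like this sketch (SOMEENVVAR is the placeholder from above; myimage is a placeholder image name):
ENV SOMEENVVAR=default
ENTRYPOINT echo $SOMEENVVAR
and then at run time:
docker run myimage # prints "default"
docker run -e SOMEENVVAR=overridden myimage # prints "overridden"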
I think there's a series of issues here.
when I
docker run -i -t ubuntu /bin/bash
echo $USER
set
I don't see $USER set at all; whoami does report daemon though.
Additionally, I have the suspicion (but have not looked at the code yet) that ENV vars in the Dockerfile are escaped to prevent this use; many people assume that they can export host variables into the built container, but this is something the Docker developers would like to avoid.