1) I am running a Docker container with the following command (passing a few environment variables with the -e option):
$ docker run --name=xyz -d -e CONTAINER_NAME=xyz -e SSH_PORT=22 -e NWMODE=HOST -e XDG_RUNTIME_DIR=/run/user/0 --net=host -v /mnt:/mnt -v /dev:/dev -v /etc/sysconfig/network-scripts:/etc/sysconfig/network-scripts -v /:/hostroot/ -v /etc/hostname:/etc/host_hostname -v /etc/localtime:/etc/localtime -v /var/run/docker.sock:/var/run/docker.sock --privileged=true cf3681e04bfb
2) After running the container as above, I check the environment variable NWMODE inside the container, and it shows up correctly:
$ docker exec -it xyz bash
$ env | grep NWMODE
NWMODE=HOST
3) Now I created a sample service 'b', shown below, which executes a script b.sh (where I try to access NWMODE):
root@ubuntu16:/etc/systemd/system# cat b.service
[Unit]
Description=testing service b
[Service]
ExecStart=/bin/bash /etc/systemd/system/b.sh
root@ubuntu16:/etc/systemd/system# cat b.sh
#!/bin/bash
systemctl import-environment
echo "NWMODE:" $NWMODE
4) Now if I start service 'b' and look at its logs, it shows that it is not able to access the NWMODE environment variable:
$ systemctl start b
$ journalctl -fu b
...
systemd[1]: Started testing service b.
bash[641]: NWMODE:            <-- blank, $NWMODE is empty here
5) Now, rather than having 'systemctl import-environment' in b.sh, if I do the following, the b.service logs show the correct value of the NWMODE environment variable:
$ systemctl import-environment
$ systemctl start b
Though step 5 above works, I can't use it, because all the services in my system will be started automatically by systemd at boot (so there is no chance to run systemctl import-environment first). In that case, can anyone please let me know how I can access the environment variables (passed using the 'docker run ...' command above) in a service file (for example, in b.sh above)? Can this be achieved somehow with systemctl import-environment, or is there some other way?
systemd unsets all environment variables to provide a clean environment. As far as I know, this is intended as a security feature.
Workaround: Create a file /etc/systemd/system.conf.d/myenvironment.conf:
[Manager]
DefaultEnvironment=CONTAINER_NAME=xyz NWMODE=HOST XDG_RUNTIME_DIR=/run/user/0
systemd will set the environment variables declared in this file.
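Note that the systemd manager only reads system.conf.d drop-ins when it starts, so inside an already-running container you may need to make it re-execute before testing. A quick hedged check (the journal line below is illustrative):
$ systemctl daemon-reexec     # re-read /etc/systemd/system.conf.d/*.conf
$ systemctl restart b
$ journalctl -u b | tail -n 1
bash[712]: NWMODE: HOST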
You can set up an ENTRYPOINT script that automatically creates this file before running systemd. Example:
RUN echo '#! /bin/bash \n\
echo "[Manager] \n\
DefaultEnvironment=$(while read -r Line; do echo -n "$Line " ; done < <(env)) \n\
" >/etc/systemd/system.conf.d/myenvironment.conf \n\
exec /lib/systemd/systemd \n\
' >/usr/local/bin/setmyenv && chmod +x /usr/local/bin/setmyenv
ENTRYPOINT /usr/local/bin/setmyenv
Instead of creating the script within the Dockerfile, you can store it outside and add it with COPY:
#! /bin/bash
echo "[Manager]
DefaultEnvironment=$(while read -r Line; do echo -n "$Line " ; done < <(env))
" >/etc/systemd/system.conf.d/myenvironment.conf
exec /lib/systemd/systemd
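For completeness, the matching Dockerfile lines could look like this (a sketch; the file name setmyenv.sh is my assumption):
COPY setmyenv.sh /usr/local/bin/setmyenv
RUN chmod +x /usr/local/bin/setmyenv
ENTRYPOINT ["/usr/local/bin/setmyenv"]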
TL;DR
Run the command using bash: first store the Docker environment variables to a file (or just pipe them to awk), extract and export the variable, and finally run your main script.
ExecStart=/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /home/env_file; export MY_ENV_VARIABLE=$(awk -F= -v key=MY_ENV_VARIABLE '$1==key {print $2}' /home/env_file); /usr/bin/python3 /usr/bin/my_python_script.py"
What @mviereck says is true; still, I have found another solution to this problem.
My use case is to pass an environment variable to my systemd container in the docker run command (docker run -e MY_ENV_VARIABLE="some_val") and use it in the Python script that is run through the systemd unit file.
According to this post (https://forums.docker.com/t/where-are-stored-the-environment-variables/65762), the container's environment variables can be found in /proc/1/environ inside the container, i.e. in the environment of the container's init process. Performing a cat does show that MY_ENV_VARIABLE=some_val exists, though in mangled form (the entries are NUL-separated):
$ cat /proc/1/environ
HOSTNAME=271fb0d986bdMY_ENV_VARIABLE=some_valcontainer=dockerLC_ALL=CDEBIAN_FRONTEND=noninteractiveHOME=/rootroot@271fb0d986bd
The main task now is to extract the MY_ENV_VARIABLE="some_val" value and pass it to the ExecStart directive in the systemd unit file.
(extraction code referenced from "How to grep for value in a key-value store from plain text")
# this outputs a nice key,value pair
$ cat /proc/1/environ | tr '\0' '\n'
HOSTNAME=861f23cd1b33
MY_ENV_VARIABLE=some_val
container=docker
LC_ALL=C
DEBIAN_FRONTEND=noninteractive
HOME=/root
# we can store this in a file for later use, too
$ cat /proc/1/environ | tr '\0' '\n' > /home/env_file
# we can then reuse the file to extract the value of interest against a key
$ awk -F= -v key="MY_ENV_VARIABLE" '$1==key {print $2}' /home/env_file
some_val
Now, in the ExecStart directive of the systemd unit file, we can do this:
[Service]
Type=simple
ExecStart=/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /home/env_file; export MY_ENV_VARIABLE=$(awk -F= -v key=MY_ENV_VARIABLE '$1==key {print $2}' /home/env_file); /usr/bin/python3 /usr/bin/my_python_script.py"
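A possibly cleaner variant (my sketch, not part of the original answer): let systemd itself load the dumped file through its EnvironmentFile= directive, so every container variable becomes available, not just one. The path /run/container.env is an arbitrary choice:
[Service]
Type=simple
# dump the container environment first (same tr trick as above)
ExecStartPre=/bin/bash -c "cat /proc/1/environ | tr '\0' '\n' > /run/container.env"
# the '-' prefix tells systemd not to fail while the file does not exist yet
EnvironmentFile=-/run/container.env
ExecStart=/usr/bin/python3 /usr/bin/my_python_script.py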
Related
I'm trying to run a simple script inside a Docker container after start. Initially, a previous developer decided to use s6 inside.
#!/usr/bin/execlineb -P
foreground { sleep 2 }
nginx
When I try to start it, I get this message:
execlineb: usage: execlineb [ -p | -P | -S nmin | -s nmin ] [ -q | -w | -W ] [ -c commandline ] script args
It looks like something is wrong with executing these scripts, or with execline.
I'm using Docker for Windows under Windows 10; however, if somebody else builds this container on Ubuntu (or any other Linux), everything is OK.
Can anybody help with this kind of problem?
Docker image: plain Alpine.
After researching this "HUGE" problem, we found two ways to solve it. It is definitely a problem with special characters: Windows line endings add a carriage return ('\r') that execlineb chokes on.
Option 1, dos2unix:
Install dos2unix in your container (in the Dockerfile):
RUN apk --no-cache add dos2unix
Run it against your sh scripts:
RUN for file in {PathToYourFiles}; do \
        dos2unix "$file"; \
        chmod a+xwr "$file"; \
    done
enjoy your scripts.
Option 2, VS Code (or any text editor):
Change the 'End of Line Sequence' from CRLF to LF (in VS Code, via the line-endings selector in the bottom status bar).
enjoy your scripts.
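If you prefer not to install anything, a hedged alternative using tools that are usually already present in a BusyBox/Alpine image (yourscript.sh is a placeholder name):
$ head -n1 yourscript.sh | od -c | head -n2    # a trailing \r before \n means CRLF endings
$ sed -i 's/\r$//' yourscript.sh               # strip the carriage returns in place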
I'm creating a Dockerfile that needs to execute a command; let's call it foo.
In order to execute foo, I need to create a .cfg file in the current directory with token information to call this foo service.
So basically I should do something like:
ENV FOO_TOKEN token
ENV FOO_HOST host
ENV FOO_SHARED_DIRECTORY directory
ENV LIBS_TARGET target
and then put the first three variables in a .cfg file, and launch a command using the last variable as the target.
Given that if you run more than one CMD in a Dockerfile only the last one is considered, how should I do that?
My ideal execution is docker run -e "FOO_TOKEN=aaaaaaa" -e "FOO_HOST=myhost" -e "FOO_SHARED_DIRECTORY=Shared" -e "LIBS_TARGET=target/scala-2.11/*.jar" -it --rm --name my-ci-deploy foo/foo:latest
If you wanted to keep everything in the Dockerfile (something I think is rather desirable), you can do something nasty like:
ENV SCRIPT=IyEvdXNyL2Jpbi9lbnYgYmFzaApwZG9fc3Fsc3J2PTAKc3Vkbz0KdmVuZG9yPSQoIGxzYl9yZWxlYXNlIC1p
RUN echo -n "$SCRIPT" | base64 -d | /usr/bin/env bash
Where the contents of SCRIPT= are derived by piping your shell script thusly:
cat my_script.sh | base64 --wrap=0
You may have to adjust the /usr/bin/env bash if you have a really minimal (Alpine) setup.
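To sanity-check the encoding before baking it into the image, a quick round trip on the host:
$ base64 --wrap=0 < my_script.sh > script.b64
$ base64 -d < script.b64 | diff - my_script.sh && echo "round-trip OK"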
I'm fairly new to Node and nginx. I have the task of building a simple webserver which hosts dynamic content. A crucial part of the webserver is to take inputs from the user about the ports to be used, any custom domain to be used (in place of localhost), SSL certificates, etc. from the installer [it's supposed to be built for Docker], but I have no idea how to execute a script such that it passes the variables entered by the user (like $SERVER_URI) to nginx.conf and the Node file, overwriting the current data.
I suggest creating a config file and reading the values from it, so everything stays dynamic.
Here is how you can handle the SSL certificates, ports, and other environment values dynamically; the Docker container name and image name are also passed in as arguments.
Create a file docker.config which contains the port mappings, environment variables, path mappings, host entries, and links (if you wish to link containers). Leave a section blank if you do not need it, and remove the host_port:container_port entry; it is there just for illustration.
docker.config
START_PORT_MAPPINGS
host_port:container_port
8080:80
END_PORT_MAPPINGS
START_PATH_MAPPINGS
/path_to_code/:/var/www/htlm/test
/path_to_nginx_config1:/etc/nginx/nginx.conf
/path_to_ssl_certs:/container_path_to_Certs
END_PATH_MAPPINGS
START_LINKING
db:db-server
END_LINKING
START_HOST_MAPPINGS
test.com:192.168.1.23
test2.com:192.168.1.23
END_HOST_MAPPINGS
START_ENV_VARS
MYSQL_ROOT_PASSWORD=1234
OTHER_ENV_VAR=value
END_ENV_VARS
Create start.sh. It reads the values from docker.config and runs your Docker container. It needs two arguments: 1st the container name, 2nd the image name.
function read_config() {
docker_name="${1}"
input="docker.config"
option_key=$(echo "${2}" | cut -d':' -f1)
config_name=$(echo "${2}" | cut -d':' -f2)
post_fix=$(echo "${2}" | cut -d':' -f3)
while IFS=$' \t\n\r' read -r line; do
if [[ $line == END_"${config_name}" ]] ; then
read_prop="no"
fi
if [[ $read_prop == "yes" ]] ; then
echo -n "${option_key}${line}${post_fix} "
fi
if [[ $line == START_"${config_name}" ]] ; then
read_prop="yes"
fi
done < "$input"
}
function get_run_configs() {
docker_name=${1}
declare -a configs=("-p :PORT_MAPPINGS:" "-v :PATH_MAPPINGS:" "--add-host=:HOST_MAPPINGS:" "-e :ENV_VARS:" "--link :LINKING:")
local run_command=""
for config in "${configs[@]}"
do
config_vals=($(read_config "${docker_name}" "${config}"))
if [ ! -z "${config_vals}" ];
then
for config_val in "${config_vals[@]}"
do
run_command="${run_command} ${config_val}"
done
else
echo >&2 "No config found for ${config}"
fi
done
echo "${run_command}"
}
container_name=$1
image_name=$2
docker_command=$(get_run_configs $container_name)
echo $docker_command
docker run --name $container_name $docker_command -dit $image_name
The resulting command from ./start.sh test test will be:
docker run --name test -p host_port:container_port -p 8080:80 -v /path_to_code/:/var/www/htlm/test -v /path_to_nginx_config1:/etc/nginx/nginx.conf -v /path_to_ssl_certs:/container_path_to_Certs --add-host=test.com:192.168.1.23 --add-host=test2.com:192.168.1.23 -e MYSQL_ROOT_PASSWORD=1234 -e OTHER_ENV_VAR=value --link db:db-server -dit test
When issuing grunt shell:test, I'm getting the warning "the input device is not a TTY" and don't want to have to use -f:
$ grunt shell:test
Running "shell:test" (shell) task
the input device is not a TTY
Warning: Command failed: /bin/sh -c ./run.sh npm test
the input device is not a TTY
Use --force to continue.
Aborted due to warnings.
Here's the Gruntfile.js command:
shell: {
  test: {
    command: './run.sh npm test'
  }
}
Here's run.sh:
#!/bin/sh
# should use the latest available image to validate, but not LATEST
if [ -f .env ]; then
RUN_ENV_FILE='--env-file .env'
fi
docker run $RUN_ENV_FILE -it --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 $@
Here's the relevant package.json scripts entry with the test command:
"scripts": {
"test": "mocha --color=true -R spec test/*.test.js && npm run lint"
}
How can I get grunt to make docker happy with a TTY? Executing ./run.sh npm test outside of grunt works fine:
$ ./run.sh npm test
> yaktor@0.59.2-pre.0 test /app
> mocha --color=true -R spec test/*.test.js && npm run lint
[snip]
105 passing (3s)
> yaktor@0.59.2-pre.0 lint /app
> standard --verbose
Remove the -t from the docker run command:
docker run $RUN_ENV_FILE -i --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 $@
The -t tells Docker to allocate a pseudo-TTY, which won't work if you don't have a TTY yourself and try to attach to the container (the default when you don't pass -d).
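You can reproduce the difference with any image (a hedged demonstration; alpine is just an example):
$ echo hi | docker run -i --rm alpine cat      # stdin is piped in, no TTY needed
hi
$ echo hi | docker run -it --rm alpine cat     # -t demands a TTY that the pipe can't provide
the input device is not a TTY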
This solved an annoying issue for me. The script had these lines:
docker exec -it $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file
mutt -s "File is here" someone@somewhere.com < /var/tmp/temp.file
The script ran great when executed directly, and the mail would arrive with the correct output. However, when run from cron (crontab -e), the mail would arrive with no content. I tried many things around permissions, shells, paths, etc., with no joy!
I finally found this:
*/20 * * * * scriptblah.sh > $HOME/cron.log 2>&1
And on that cron.log file found this output:
the input device is not a TTY
That search led me here, and after I removed the -t, it's working great now!
docker exec -i $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file
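The underlying cause is that cron jobs run without a controlling terminal. A quick hedged check with the tty utility:
$ tty            # in an interactive shell
/dev/pts/0
# from a cron entry such as: * * * * * tty > /tmp/tty.out 2>&1
$ cat /tmp/tty.out
not a tty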
How can I get /etc/profile to run automatically when starting an Alpine Docker container interactively? I have added some aliases to an aliases.sh file and placed it in /etc/profile.d, but when I start the container using docker run -it [my_container] sh, my aliases aren't active. I have to manually type . /etc/profile from the command line each time.
Is there some other configuration necessary to get /etc/profile to run at login? I've also had problems with using a ~/.profile file. Any insight is appreciated!
EDIT:
Based on VonC's answer, I pulled and ran his example ruby container. Here is what I got:
$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42
/ # more /etc/profile.d/rubygems.sh
export PATH=$PATH:/usr/lib/ruby/gems/2.0.0/bin
/ # env
no_proxy=*.local, 169.254/16
HOSTNAME=6c7e93ebc5a1
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
Although the /etc/profile.d/rubygems.sh file exists, it is not being run when I log in, and my PATH environment variable is not being updated. Am I using the wrong docker run command? Is something else missing? Has anyone gotten ~/.profile or /etc/profile.d/ files to work with Alpine on Docker? Thanks!
The default shell in Alpine Linux is ash.
Ash will only read the /etc/profile and ~/.profile files if it is started as a login shell (sh -l).
To force Ash to source /etc/profile, or any other script you want, upon its invocation as a non-login shell, you need to set an environment variable called ENV before launching Ash.
e.g. in your Dockerfile
FROM alpine:3.5
ENV ENV="/root/.ashrc"
RUN echo "echo 'Hello, world!'" > "$ENV"
When you build that you get:
deployer@ubuntu-1604-amd64:~/blah$ docker build --tag test .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine:3.5
3.5: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
Status: Downloaded newer image for alpine:3.5
---> 4a415e366388
Step 2/3 : ENV ENV "/root/.ashrc"
---> Running in a9b6ff7303c2
---> 8d4af0b7839d
Removing intermediate container a9b6ff7303c2
Step 3/3 : RUN echo "echo 'Hello, world!'" > "$ENV"
---> Running in 57c2fd3353f3
---> 2cee6e034546
Removing intermediate container 57c2fd3353f3
Successfully built 2cee6e034546
Finally, when you run the newly generated container, you get:
deployer@ubuntu-1604-amd64:~/blah$ docker run -ti test /bin/sh
Hello, world!
/ # exit
Notice the Ash shell didn't run as a login shell.
So to answer your query, replace
ENV ENV="/root/.ashrc"
with:
ENV ENV="/etc/profile"
and Alpine Linux's Ash shell will automatically source the /etc/profile script each time the shell is launched.
Gotcha: /etc/profile is normally meant to be sourced only once! So I would advise that you don't source it, and source a /root/.somercfile instead.
Source: https://stackoverflow.com/a/40538356
You can still try in your Dockerfile:
RUN echo '\
. /etc/profile ; \
' >> /root/.profile
(assuming the current user is root. If not, replace /root with the full home path)
That being said, those /etc/profile.d/xx.sh should run.
See codeclimate/docker-alpine-ruby as an example:
COPY files /
with files/etc including a files/etc/profile.d/rubygems.sh that runs just fine.
In the OP's project Dockerfile, there is a
COPY aliases.sh /etc/profile.d/
But the default shell is not a login shell (sh -l), which means profile files (or those in /etc/profile.d) are not sourced.
Adding sh -l would work:
docker@default:~$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42 sh -l
87a58e26b744:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/ruby/gems/2.0.0/bin
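If you want interactive runs of your own image to get this behaviour by default, one hedged option is to make a login shell the image's default command (a sketch; adjust the shell path to your base image):
CMD ["/bin/sh", "-l"]
A plain docker run -it <image> then lands in a login shell, and the /etc/profile.d scripts are sourced.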
As mentioned by Jinesh before, the default shell in Alpine Linux is ash:
localhost:~$ echo $SHELL
/bin/ash
localhost:~$
Therefore the simple solution is to add your aliases in .profile. In this case, I put all my aliases in ~/.ash_aliases:
localhost:~$ cat .profile
# ~/.profile
# Alias
if [ -f ~/.ash_aliases ]; then
. ~/.ash_aliases
fi
localhost:~$
The .ash_aliases file:
localhost:~$ cat .ash_aliases
alias a=alias
alias c=clear
alias f=file
alias g=grep
alias l='ls -lh'
localhost:~$
And it works :)
I use this:
docker exec -it my_container /bin/ash -l
The -l flag passed to ash makes it behave as a login shell, thus reading ~/.profile.