Set environment variables from an sh script in a systemd service file

I'm trying to use a ready-made bash script that sets environment variables.
This is the service file I'm trying to use:
[Unit]
Description=myserver service
After=multi-user.target
[Service]
Type=simple
User=ec2-user
Group=ec2-user
WorkingDirectory=/home/ec2-user/myserver/
ExecStart=/bin/sh -c '/home/ec2-user/myserver/config/myserverVars.sh ;/home/ec2-user/venv/bin/python /home/ec2-user/myserver/myserver.py 2>&1 >> /home/ec2-user/myserver/logs/systemd_myserver.log'
StandardOutput=append:/home/ec2-user/myserver/logs/systemd_stdout.log
StandardError=append:/home/ec2-user/myserver/logs/systemd_stderr.log
[Install]
WantedBy=multi-user.target
The myserverVars.sh script:
#!/bin/bash
export APP1=foo#gmail.com
export APP2_BIND_PASS=xxxxxx
export APP3=xxxxxx
The variables in /home/ec2-user/myserver/config/myserverVars.sh are never set, and the server starts without them, which is wrong.
I'm trying to avoid using the Environment key or EnvironmentFile.

When you run /home/ec2-user/myserver/config/myserverVars.sh, it runs in a new process which exits when it finishes, so all of its changes to the environment are lost. You need to ask the current shell to execute the script without starting a new process. This is done with the source command, which is also available as the "dot" command, written as a single period. Note that /bin/sh is often dash, which only understands the dot form, so use
/bin/sh -c '. /home/ec2-user/myserver/config/myserverVars.sh; ...'
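Applied to the unit above, the whole ExecStart line would look something like this sketch (paths taken from the question; the exec is optional, but it lets python replace the wrapper shell so systemd tracks the right main process, and the StandardOutput/StandardError directives already take care of the log redirection):
ExecStart=/bin/sh -c '. /home/ec2-user/myserver/config/myserverVars.sh; exec /home/ec2-user/venv/bin/python /home/ec2-user/myserver/myserver.py'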

Related

Docker entrypoint script not sourcing file

I have an entrypoint script with Docker which is getting executed. However, it just doesn't run the source command to load a file full of env values.
Here's the relevant section from the Dockerfile:
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["-production"]
I have tried two versions of the entrypoint script. Neither of them works.
VERSION 1
#!/bin/bash
cat >> /etc/bash.bashrc <<EOF
if [[ -f "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env" ]]
then
echo "${SERVICE_NAME}.env found ..."
set -a
source "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
set +a
fi
EOF
echo "INFO: Starting ${SERVICE_NAME} application, environment:"
exec -a $SERVICE_NAME node .
VERSION 2
ENV_FILE=/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env
if [[ -f "$ENV_FILE" ]]; then
echo "INFO: Loading environment variables from file: ${ENV_FILE}"
set -a
source $ENV_FILE
set +a
fi
echo "INFO: Starting ${SERVICE_NAME} application..."
exec -a $SERVICE_NAME node .
Version 2 of the above prints to the log that it has found the file; however, the source command simply isn't loading the contents of the file into the environment. I check whether the contents have been loaded by running the env command.
I've been trying a few things for 3 days now with no progress. Please can someone help me? Please note I am new to Docker, which is making things quite difficult.
I think your second version is almost there.
Normally Docker doesn't read or use shell dotfiles at all. This isn't anything particular to Docker, just that you're not running an "interactive" or "login" shell at any point in the sequence. In your first form you write out a .bashrc file but then exec node, and nothing there ever re-reads the dotfile.
You mention in the question that you use the env command to check the environment. If this is via docker exec, that launches a new process inside the container, but it's not a child of the entrypoint script, so any setup that happens there won't be visible to docker exec; that check can be misleading.
I can suggest a couple of cleanups that might make it a little easier to see the effects of this. The biggest is to split the node invocation out of the entrypoint script. If you have both an ENTRYPOINT and a CMD then Docker passes the CMD as arguments to the ENTRYPOINT; if you change the entrypoint script to end with exec "$@" then it will run whatever it got passed.
#!/bin/sh
# (trying to avoid bash-specific constructs)
# Read the environment file, exporting everything it defines
ENV_FILE="/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
if [ -f "$ENV_FILE" ]; then
  set -a
  . "$ENV_FILE"
  set +a
fi
# Run the main container command
exec "$@"
And then in the Dockerfile, put the node invocation as the main command
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array syntax
CMD ["node", "."] # could be shell-command syntax
The important thing with this is that it's easy to override the command but leave the entrypoint intact. So if you run
docker run --rm your-image env
that will launch a temporary container, but passing env as the command instead of node . That will go through the steps in the entrypoint script, including setting up the environment, but then print out the environment and exit immediately, which lets you observe the changes.
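As a hypothetical illustration (FOO and the image name are invented for the example), if the env file contained FOO=bar, the check could look like:
docker run --rm -e SERVICE_NAME=myservice your-image env | grep FOO
which should print FOO=bar if the entrypoint sourced the file correctly.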

Docker container environment variable file during runtime

I have a docker image that basically schedules a cron job at a frequency defined when building the image using the below.
COPY myjobtime /etc/cron.d/myjobtime
RUN chmod 0644 /etc/cron.d/myjobtime &&\
crontab /etc/cron.d/myjobtime
CMD cron
I have the cron entry in the file myjobtime.
*/10 * * * * /usr/local/bin/sh /app/myscript.py
I would like to be able to pass the cron schedule during the runtime. Meaning, if someone wants to modify the cron schedule to a different frequency, they should be able to do that while running the container and passing an environment variable file with the new cron schedule in it. Can this be done?
The important detail is that you need to create and install the crontab file when the container starts up. I find an entrypoint wrapper script to be a useful pattern for this: set the image's ENTRYPOINT to a shell script that does whatever first-time setup is required, then have it exec "$@" to run the image's CMD.
If your image is ultimately based on a Linux distribution based on the GNU toolset, then envsubst is a really helpful program here. It reads in a text file, expands environment variable references, and writes out the result. I'll assume you have this available; on Alpine-based images you can do similar tricks with sed(1) (though escaping around the cron schedule will become tricky).
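A quick sketch of what envsubst does on its own:
echo 'schedule is ${CRON_SCHEDULE}' | CRON_SCHEDULE='*/10 * * * *' envsubst
which prints schedule is */10 * * * *. The single quotes matter: they keep the shell from expanding the variable, so that envsubst sees the literal ${CRON_SCHEDULE}.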
This makes the entrypoint wrapper script something like:
#!/bin/sh
# entrypoint.sh
# Set a default schedule, if the user didn't provide one
if [ -z "$CRON_SCHEDULE" ]; then
export CRON_SCHEDULE='*/10 * * * *'
fi
# Run substitutions on the template file and inject the crontab
envsubst < /app/myjobtime.cron.tmpl | crontab
# Run the main container command
exec "$#"
Since the template isn't a "normal" crontab, it can't go in the "normal" crontab directory; putting it in the application directory is fine. The template has an environment variable reference where the schedule would go:
# myjobtime.cron.tmpl
${CRON_SCHEDULE} /app/myscript.py
In your image, set the wrapper script to be the ENTRYPOINT, make sure the template file is in the right place, and leave the CMD unchanged.
# (assuming there's not a broad `COPY . .`)
COPY myjobtime.cron.tmpl .
COPY entrypoint.sh .
ENTRYPOINT ["/app/entrypoint.sh"] # must be JSON-array syntax
CMD cron # unchanged
This should allow you to override the cron schedule.
docker run -d --name hourly myappcron
docker run -d --name daily -e 'CRON_SCHEDULE=0 0 * * *' myappcron
Since the entrypoint wrapper script runs whatever command was provided, and you can override the command pretty easily, this also lets you double-check that the right schedule got set.
docker run --rm -e 'CRON_SCHEDULE=0 0 * * *' myappcron \
crontab -l # runs instead of the cron daemon
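If the substitution worked, that last command should print something like:
0 0 * * * /app/myscript.py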

Custom shell script in crontab

I have a simple shell script that runs a docker exec command against a container.
The script is located in /var/www/mysite-nginx/nginx-reload.sh and permissions of this file are -rwxrwxr-x
#!/bin/sh
docker exec -it mysite_nginx nginx -s reload
If I execute this script directly from the shell, it works. But if I add the script to my crontab with the following line, it doesn't work.
15 4 * * * /var/www/mysite-nginx/nginx-reload.sh
I suppose that cron isn't executing the command properly. What is wrong?
On /var/log/syslog I have:
Jul 23 15:30:01 arrubiu CRON[29511]: (sergej) CMD (/var/www/mysite-nginx/nginx-reload.sh)
[EDIT] Solved in this way: docker exec is not working in cron
The issue seems to be that docker is not found, because cron runs jobs with a very limited PATH. There are two ways around this:
You can write the full paths of all applications into your script; you can find a path using e.g. locate docker, so that it looks something like:
#!/bin/sh
/usr/bin/docker exec -it mysite_nginx nginx -s reload
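Note that command -v docker will also print the full path. And because cron provides no TTY, docker exec -t fails there with "the input device is not a TTY", so the -it flags generally need to be dropped as well (this is what the linked fix amounts to):
/usr/bin/docker exec mysite_nginx nginx -s reload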
Alternatively, you can set $PATH and other environment variables the same way they are set for a normal sh script. To achieve that, first back up what is saved in /etc/environment, and then append the currently available variables by executing:
cp /etc/environment ~/my_etc_environment_backup
env >> /etc/environment
Related questions on SO
Where can I set environment variables that crontab will use?
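A lighter-weight variant of the PATH approach: cron honors variable assignments at the top of a crontab, so you can set PATH there without touching /etc/environment. A sketch, reusing the job from the question:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
15 4 * * * /var/www/mysite-nginx/nginx-reload.sh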

How to set environment variable in docker container system wide at container start for all users?

I need to set some environment variables for all users and processes inside a docker container. They should be set at container start, not in the Dockerfile, because they depend on the running environment.
So the simple Dockerfile
FROM ubuntu
RUN echo 'export TEST=test' >> '/root/.bashrc'
works well for interactive sessions
docker run -ti test bash
then
env
and there is TEST=test
but when running docker run -ti test env there is no TEST.
I have tried:
RUN echo 'export TEST=test' >> '/etc/environment'
RUN echo 'TEST="test"' >> '/etc/environment'
RUN echo 'export TEST=test' >> /etc/profile.d/1.sh
ENTRYPOINT export TEST=test
Nothing helps.
Why do I need this? I have an http_proxy variable inside the container that is automatically set by docker, and I need to set other variables based on it (e.g. JAVA_OPT), system wide, for all users and processes, in the running environment, not at build time.
I would create a script which would be an entrypoint:
#!/bin/bash
# if the env variable is not set, set it
if [ -z "$VAR" ]; then
  # env variable is not set
  export VAR=$(a command that gives the var value)
fi
# pass the arguments received by entrypoint.sh
# to /bin/bash with the command (-c) option
/bin/bash -c "$@"
And in the Dockerfile I would set the entrypoint, using the exec form so that arguments are actually passed through to the script:
ENTRYPOINT ["/entrypoint.sh"]
Now every time I run docker run -it <image> <any command>, it uses my script as the entrypoint, so it always runs first and then passes the arguments to the right place, which is /bin/bash.
Improvements
The above script is enough if you always run the entrypoint with arguments; otherwise "$@" will be empty and you will get the error /bin/bash: -c: option requires an argument. An easy fix is an if statement:
if [ -n "$1" ]; then
  /bin/bash -c "$@"
fi
Setting a default parameter in the Dockerfile's ENTRYPOINT (or a default CMD) would also avoid the empty-argument case.
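Putting it together, the Dockerfile wiring might look like this sketch (paths assumed; the JSON-array ENTRYPOINT is what forwards docker run arguments to the script as "$@"):
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]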

How do I run the eval $(envkey-source) command in docker using Dockerfile?

I want to run the command eval $(envkey-source) to set certain environment variables using envkey. I install it, set my ENVKEY variable and then try to import all the environment variables. I do this all via Docker. However, docker gives an error on this command; the full output is shown below.
I tried reading the documentation of envkey but it says nothing about Docker.
I have installed envkey using the following commands:
ENV ENVKEY=yada_yada
RUN curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash
Up to here, all goes well. I get verbose suggestions on the console about how to run envkey-source to get all the environment variables set.
The problem comes at this step:
RUN eval $(envkey-source)
The error:
Step 31/35 : RUN eval $(envkey-source)
---> Running in 6a9ebf1ede96
/bin/sh: 1: export: : bad variable name
The command '/bin/sh -c eval $(envkey-source)' returned a non-zero code: 2
You can't do this, for a couple of reasons. The envkey documentation eventually links to an example in their GitHub which you might find informative.
Each Dockerfile RUN command runs a new shell in a new container. In particular, environment variables set within a RUN command are lost after it exits. Any form of RUN export ... is a no-op. If variables are static you can set them using the ENV directive, but in this case where you're running a program that needs to generate them dynamically, you need another approach.
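A two-line Dockerfile sketch makes the effect visible:
RUN export TEST=test          # takes effect only inside this RUN's shell
RUN echo "TEST is [$TEST]"    # prints TEST is [] because the variable is gone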
A typical pattern here is to use a shell script as your container's ENTRYPOINT. That does some initial setup and then replaces itself with the container's CMD. Since the CMD runs in the same shell environment as the rest of the script, you can do dynamic variable setup here. The script might look like:
#!/bin/sh
eval "$(envkey-source)"
exec "$#"
The other thing to keep in mind here is that anyone can docker inspect your image and get its environment variables back out, or run docker run imagename /usr/bin/env. If you could run envkey-source in the Dockerfile, the resulting environment variables would be baked into the image in clear text, which defeats the purpose; even embedding the key in the image effectively leaks it. You should pass the key at runtime using a docker run -e option or a Docker Compose environment: key, relaying it from the host's environment.
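For instance (image name invented for the example):
docker run -e ENVKEY my-image
Passing -e ENVKEY with no value copies the variable from the host's environment, so the key never appears in the image or on the command line.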
