Passing variables from sh to expect not working with docker-compose - docker

The Dockerfile contains:
ENV VAR 1
COPY ./setup.exp /tmp/
RUN chmod a+x /tmp/setup.exp
The expect file:
#!/usr/bin/expect
set timeout -1
spawn setup -v
expect "Enter variable: "
send -- "$env(VAR)\r"
The shell file (main.sh):
#!/bin/sh
/tmp/setup.exp $VAR
When I run ./main.sh from the shell inside the container, it works perfectly fine.
However, when I run docker-compose up with entrypoint: ./main.sh, it prints this error:
send: spawn id exp4 not open
while executing
"send -- "$env(VAR)\r""
(file "/tmp/setup.exp" line 5)
If I pass the variable directly as entrypoint: /tmp/setup.exp ${VAR}, it prints this warning:
WARNING: The VAR variable is not set. Defaulting to a blank string.
I also tried set VAR [lindex $argv 0]; and then send -- "$VAR\r" without any success.
It seems as if the script is only able to read Docker's env variables when run from a shell inside the container.
Any suggestions?

Thanks @glennjackman for pointing that out. I ran expect -d for more verbose output.
It turns out the reason the program exited is that Python was raising this exception:
ValueError: invalid width 0 (must be > 0)
Setting the variable, e.g. ENV COLUMNS 100, in the Dockerfile solved the issue for me.
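For reference, the two pieces together as a sketch (100 is just the width that worked here):
# run the script with expect's diagnostics to see why the spawned program exited
expect -d /tmp/setup.exp
# then, in the Dockerfile: give the spawned program a terminal width to report
ENV COLUMNS 100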

Related

docker-compose load multiple env file

I want to make docker-compose (v2.12.2) load env variables from both .env and .env.local
I tried the method mentioned here:
docker-compose --env-file <(cat "./.env" && ([ -f "./.env.local" ] && cat "./.env.local" || echo '')) up -d
But it seems that no envs are loaded and plenty of warnings are thrown:
WARN[0000] The "PGADMIN_DEFAULT_EMAIL" variable is not set. Defaulting to a blank string.
WARN[0000] The "PGADMIN_DEFAULT_PASSWORD" variable is not set. Defaulting to a blank string.
WARN[0000] The "PGADMIN_HOST_PORT" variable is not set. Defaulting to a blank string.
...
I don't know why. Any help would be appreciated.
vim <(cat "./.env" && ([ -f "./.env.local" ] && cat "./.env.local" || echo ''))
vim shows that the two env files are concatenated correctly:
PGADMIN_DEFAULT_EMAIL=example@example.com
PGADMIN_DEFAULT_PASSWORD=example
PGADMIN_HOST_PORT=8080
PGADMIN_HOST_PORT=8081
OS: CentOS 7
You could try to use a little script like the one sketched below:
I created two test env files containing the image and version.
In the command shown below, I create a test3 file which combines those two and pass it to docker-compose as --env-file; afterwards test3 gets deleted again.
Note: test3 does not get deleted if there is a problem starting the Docker container.
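A minimal reconstruction matching that description (the file names test1, test2, test3 are assumptions):
#!/bin/sh
# combine the two env files into a third one
cat ./test1 ./test2 > ./test3
# use the combined file for docker-compose; && means test3 is only
# removed when the containers start successfully
docker-compose --env-file ./test3 up -d && rm ./test3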

Docker entrypoint script not sourcing file

I have an entrypoint script with docker which is getting executed. However, it just doesn't run the source command to source a file full of env values.
Here's the relevant section from the Dockerfile:
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["-production"]
I have tried two versions of the entrypoint script. Neither of them is working.
VERSION 1
#!/bin/bash
cat >> /etc/bash.bashrc <<EOF
if [[ -f "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env" ]]
then
echo "${SERVICE_NAME}.env found ..."
set -a
source "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
set +a
fi
EOF
echo "INFO: Starting ${SERVICE_NAME} application, environment:"
exec -a $SERVICE_NAME node .
VERSION 2
ENV_FILE=/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env
if [[ -f "$ENV_FILE" ]]; then
echo "INFO: Loading environment variables from file: ${ENV_FILE}"
set -a
source $ENV_FILE
set +a
fi
echo "INFO: Starting ${SERVICE_NAME} application..."
exec -a $SERVICE_NAME node .
Version 2 above prints to the log that it has found the file; however, the source command simply isn't loading the contents of the file into the environment. I check whether the contents have been loaded by running the env command.
I've been trying a few things for 3 days now with no progress. Please can someone help me? Please note I am new to Docker, which is making things quite difficult.
I think your second version is almost there.
Normally Docker doesn't read or use shell dotfiles at all. This isn't anything particular to Docker, just that you're not running an "interactive" or "login" shell at any point in the sequence. In your first form you write out a .bashrc file but then exec node, and nothing there ever re-reads the dotfile.
You mention in the question that you use the env command to check the environment. If this is via docker exec, that launches a new process inside the container, but it's not a child of the entrypoint script, so any setup that happens there won't be visible to docker exec. This usually isn't a problem.
I can suggest a couple of cleanups that might make it a little easier to see the effects of this. The biggest is to split out the node invocation from the entrypoint script. If you have both an ENTRYPOINT and a CMD then Docker passes the CMD as arguments to the ENTRYPOINT; if you change the entrypoint script to end with exec "$@" then it will run whatever it got passed.
#!/bin/sh
# (trying to avoid bash-specific constructs)
# Read the environment file
ENV_FILE="/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
if [ -f "$ENV_FILE" ]; then
  # set -a marks the sourced assignments for export,
  # so they survive into the exec'd command below
  set -a
  . "$ENV_FILE"
  set +a
fi
# Run the main container command
exec "$@"
And then in the Dockerfile, put the node invocation as the main command
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array syntax
CMD ["node", "."] # could be shell-command syntax
The important thing with this is that it's easy to override the command but leave the entrypoint intact. So if you run
docker run --rm your-image env
that will launch a temporary container, passing env as the command instead of node .. It will go through the steps in the entrypoint script, including setting up the environment, but then print out the environment and exit immediately, which lets you observe the changes.
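For example, if the env file contained (hypothetical values):
DB_HOST=db
DB_PORT=5432
then docker run --rm your-image env should list DB_HOST and DB_PORT alongside the standard variables, confirming that the entrypoint sourced the file.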

Access environment variable value in docker ENTRYPOINT (exec form) from second parameter (with custom entrypoint script as first parameter)

I want to access the value of one of the environment variables in my Dockerfile and pass it as the first argument to the main script in the Docker ENTRYPOINT.
I came across this SO link which shows two ways to do it: one with exec form and one with shell form.
The exec form worked fine to echo the environment variable with ["sh", "-c", "echo $VARIABLE"], but when I tried it with my custom entrypoint script, ENTRYPOINT ["/bin/customentrypoint.sh", "$VARIABLE"], it was not able to get the value of the variable; instead it just took the literal string $VARIABLE.
So I went with the shell-form approach and just called ENTRYPOINT /bin/customentrypoint "$VARIABLE", which got the value of $VARIABLE fine, but it seems to restrict the number of command-line arguments: I get only one value in $# even after passing other command-line arguments from docker run. Can someone please tell me if I am doing something wrong, or should I tackle this in a different way? Thanks in advance.
The Dockerfile looks similar to this:
#!/usr/bin/env bash
...
ENV VARIABLE NO
...
RUN echo "#!/bin/bash" > /bin/customentrypoint.sh
RUN echo "if [ "\"\$1\"" = 'YES' ] ; then ; python ${LOCATION}/main.py" \"\$#\" "; else ; echo Please select -e VARIABLE=YES ; fi" >> /bin/customentrypoint.sh
RUN chmod +x /bin/customentrypoint.sh
RUN ln -s -T /bin/customentrypoint.sh /bin/customentrypoint
WORKDIR ${LOCATION}
ENTRYPOINT /bin/customentrypoint "$VARIABLE" # works fine but limits the number of command-line arguments
# ENTRYPOINT ["bin/customentrypoint", "$VARIABLE"] # not able to get the value of $VARIABLE; takes it as a constant instead
command I am using
docker run --rm -v $PWD:/mnt -e VARIABLE=VALUE docker_image:tag entrypoint -d /mnt/tmp -i /mnt/input_file
The command is interpreted slightly differently depending on how you write the arguments. If you pass it as a string (not inside an array), it gets run through a shell instead of being exec'd directly. See https://docs.docker.com/engine/reference/builder/#cmd.
What you can try, if you want to use the array form, is
ENTRYPOINT ["/bin/sh", "-c", "echo ${VARIABLE}"]

Access files written in docker volumes from the host

I have a docker container writing logfiles to a named volume.
From the host I want to analyze the logfiles and search for given log messages. But when I access the folder which 'docker inspect VOLUMENAME' gives, I get strange behavior, which I do not understand.
e.g. following command does give empty lines as output:
user#docker-host-01:~/docker-server-env/otaya-designdb$ sudo bash -c "for logfile in /var/lib/docker/volumes/design-db-logs/_data/*/*; do echo ${logfile}; done"
user#docker-host-01:~/docker-server-env/otaya-designdb$
What could be the reason?
Your local shell is expanding the variable inside the double quotes before the loop happens. Change the double quotes to single quotes.
That is, when you run
sudo bash -c "for ... ; do echo ${logfile}; done"
first your local shell replaces the variable reference with whatever your local environment has set for $logfile (probably nothing):
sudo bash -c 'for ...; do echo ; done'
and then it runs that command. If you change this to single quotes initially
sudo bash -c 'for ... ; do echo ${logfile}; done'
it will avoid this expansion.
You can see this just by putting the word echo at the front of the command: the shell will do its expansion, and then echo will print out the command that would have run.
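Applied to the original command, that looks like:
# prefixing echo shows what actually runs: ${logfile} has already been blanked out
echo sudo bash -c "for logfile in /var/lib/docker/volumes/design-db-logs/_data/*/*; do echo ${logfile}; done"
# the fixed version: single quotes defer expansion to the inner bash
sudo bash -c 'for logfile in /var/lib/docker/volumes/design-db-logs/_data/*/*; do echo "$logfile"; done'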

How do I run the eval $(envkey-source) command in docker using Dockerfile?

I want to run a command, eval $(envkey-source), to set certain environment variables using envkey. I install it, set my ENVKEY variable, and then try to import all the environment variables. I do this all via Docker. However, Docker gives an error on this command:
Step 31/35 : RUN eval $(envkey-source)
---> Running in 6a9ebf1ede96
/bin/sh: 1: export: : bad variable name
The command '/bin/sh -c eval $(envkey-source)' returned a non-zero code: 2
I tried reading the documentation of envkey but they tell nothing about Docker.
I have installed envkey using following commands:
ENV ENVKEY=yada_yada
RUN curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash
Until here, all goes well. I get verbose suggestions on the console about how to run envkey to get all the environment variables set.
The problem comes on this side:
RUN eval $(envkey-source)
The error:
Step 31/35 : RUN eval $(envkey-source)
---> Running in 6a9ebf1ede96
/bin/sh: 1: export: : bad variable name
The command '/bin/sh -c eval $(envkey-source)' returned a non-zero code: 2
You can't do this, for a couple of reasons. The envkey documentation eventually links to an example in their GitHub which you might find informative.
Each Dockerfile RUN command runs a new shell in a new container. In particular, environment variables set within a RUN command are lost after it exits. Any form of RUN export ... is a no-op. If variables are static you can set them using the ENV directive, but in this case where you're running a program that needs to generate them dynamically, you need another approach.
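A quick illustration of the RUN behavior (FOO and bar are placeholder names):
RUN export FOO=bar
# a new shell in a new container: FOO from the previous step is gone, prints an empty value
RUN echo "FOO is: $FOO"
# static values persist when set with ENV instead
ENV FOO=bar
RUN echo "FOO is: $FOO"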
A typical pattern here is to use a shell script as your container's ENTRYPOINT. That does some initial setup and then replaces itself with the container's CMD. Since the CMD runs in the same shell environment as the rest of the script, you can do dynamic variable setup here. The script might look like:
#!/bin/sh
eval "$(envkey-source)"
exec "$#"
The other thing to keep in mind here is that anyone can docker inspect your image and get its environment variables back out, or docker run imagename /usr/bin/env. If you could run envkey-source in the Dockerfile then the environment variables would be available in the image in clear text, which defeats the purpose. Even embedding the key in the image effectively leaks it. You should pass this at runtime using a docker run -e option or a Docker Compose environment: key, relaying it from the host's environment.
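Relaying the key at runtime then looks like this (imagename and the service name app are placeholders):
docker run --rm -e ENVKEY="$ENVKEY" imagename
# or in docker-compose.yml, picking ENVKEY up from the host environment:
services:
  app:
    environment:
      - ENVKEY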
