I'm trying to concatenate a value onto an existing environment variable in a Docker container I'm starting.
for example - docker run -it -e PATH=$PATH:foo continuumio/anaconda
Specifically, I am stuck trying to append a value to the PATH environment variable that already exists in the container.
I am expecting to see the following value in the PATH environment variable of the container - PATH=/opt/conda/bin:/usr/lib/jvm/java-8-openjdk-amd64/bin:/usr/local/scala/bin:/usr/local/sbt/bin:/usr/local/spark/bin:/usr/local/spark/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Instead, when using the docker run command, I get this - PATH=$PATH:foo
Is there any way to achieve what I'm aiming at?
--EDIT--
I am executing the command from a Windows 10 command-line window.
Try the following:
docker run -it continuumio/anaconda /bin/bash -c "PATH=$PATH:foo exec bash"
This command launches bash in the container, passes it a command (-c) that appends to the existing $PATH and then replaces itself with a new bash copy (exec bash) that inherits the new $PATH value.
If you also want to execute a command in the updated shell, you can pass another -c option to exec bash, but note that quoting can get tricky, and that a trick is needed to keep a shell open:
docker run -it continuumio/anaconda /bin/bash -c "PATH=$PATH:foo exec bash -c 'date; exec bash'"
The small caveat is that the shell left running once the startup command has finished is not the same instance as the one that ran the command. This shouldn't be a problem unless your startup command made modifications to the shell state (such as defining functions or aliases) that must be preserved.
As for what you tried:
The only way to set an environment variable with -e is if the value is known ahead of time, outside the container; whatever you pass to -e must be a literal value - it cannot reference definitions inside the container.
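If you do know the full value ahead of time, you can therefore pass it literally with -e. A minimal sketch, using an abbreviated, purely illustrative stand-in for the image's default PATH:
docker run -it -e "PATH=/opt/conda/bin:/usr/local/bin:/usr/bin:/bin:foo" continuumio/anaconda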
As an aside: If you ran your command on a Unix platform rather than Windows, the current shell would expand $PATH, which is also not the intent.
Related
I want my Docker container to start completely (including loading all the services that the container is supposed to start), and after that I want to run a script: startScript.sh
This is what I do:
sudo docker run -p 8080:8080 <docker image name> " /bin/bash -c ./startScript.sh"
However this gives me an error:
WFLYSRV0073: Invalid option '/bin/bash'
I even tried different shells and got the same error. I also tried passing just the script file name; that did not help either.
Note: I know that the above file is in the container, in the root folder /.
In fact, I once entered the container with sudo docker exec and ran that script file manually, and it worked.
But when I try to do it automatically as above, it does not work for me.
Some questions:
1. Please suggest what could be the issue.
2. I want to run that script after the container has started completely and is up and running - including all the services that are part of it. Is this the right way to even do it? Or does this try to run while the container is starting up?
When you pass arguments after the image name, you are not modifying the entrypoint, but the command (CMD). It seems your image's entrypoint is the server launch script (the one that prints the WFLYSRV0073 error), so the binary actually executed is still that entrypoint, with your command passed to it as arguments. That is why it fails when trying to parse /bin/bash as an option.
To run just your script, you could override the image's entrypoint with an empty string, so that the first element of your command becomes the executable. Notice I also removed the quotes; otherwise Docker would look for a single binary whose name contains spaces, which of course doesn't exist.
sudo docker run --entrypoint "" -p 8080:8080 <docker image name> /bin/bash -c ./startScript.sh
However this is probably not what you want: it won't run what the image should actually be running, only your setup script. The correct thing to do here is to modify the image's Dockerfile to run your setup script as the entrypoint, and at the end of that script run the image's current entrypoint (the actual thing you want to run).
Alternatively, if you do not control the image you are running, you can use FROM <the current image> in a new Dockerfile to build another image based on it, setting the entrypoint to your script.
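A minimal sketch of such a derived image, where my-entrypoint.sh is a hypothetical wrapper and /original-entrypoint.sh stands in for whatever the base image really runs (both names are placeholders, not taken from your image):
FROM <the current image>
COPY my-entrypoint.sh /my-entrypoint.sh
ENTRYPOINT ["/my-entrypoint.sh"]
And my-entrypoint.sh (which must be executable) would look roughly like:
#!/bin/bash
set -e
/startScript.sh                       # your setup
exec /original-entrypoint.sh "$@"     # hand off to the image's real entrypoint, keeping any CMD arguments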
Edit:
An example of how the above can be done can be seen in MariaDB's entrypoint: you first start a temporary server, run your setup, then restart it, running the definitive service (which is the CMD) at the end.
The above solutions are good if you want to perform initialization for an image. But if you just want to be able to run a script for development purposes, rather than doing it consistently in the image's entrypoint, you can copy it into your container and then run it with docker exec <container name> your-command and-arguments, roughly as sketched below.
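For example, assuming the container is already running (the container name is a placeholder):
docker cp startScript.sh <container name>:/startScript.sh
docker exec <container name> bash /startScript.sh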
I've created a docker image with all the modules required for our build environment. If I start a container in interactive mode, I can build fine.
docker run -v <host:container> -w my_working_dir -it my_image
$ make -j16
But if I try to do this from the command line, I get compile failures (well into the process):
docker run -v <host:container> -w my_working_dir my_image bash -c "make -j16"
Also, if I run the container detached and use docker exec, I get compile failures at the same point:
docker run -v <host:container> -t --detach --name star_trek my_image
docker exec star_trek bash -c "cd my_working_dir; make -j16"
Entering an interactive session with the detached container also seems to pass, though I think I have seen this fail as well.
docker exec -it star_trek_d bash
$ make -j16
This will be part of an automated build system, so I need to be able to run this without user intervention.
I'm not sure why these are behaving differently, but I have run multiple combinations and the only way I've been able to get a successful build is through the interactive method above. Other than the interactive session having more of a logged-in user configuration, what is the difference between running interactively and passing the commands on the command line?
My preferred method would be to run the container detached so I can send several sequential commands, as we have a complex build and test process, but if I have to spin the container up each time I'm OK with that at this point, because I really need to get this running, like, last week.
*Commands are pseudo-code and simplified to aid readability. I'm using bash -c because I need to run a script for our tests, so I'm really doing something like bash -c "my_script.sh; run_test"
UPDATE - We need custom paths for our build tools, and I believe these are only being set up correctly in the interactive session. Our /etc/bashrc file is used to build the correct path and export it. With docker run, I've tried running a script that does a "source /etc/bashrc", among other initialization things we need, before doing the make, but this doesn't seem to work. Note that I have to pipe in the password because this needs to be run using sudo. The other commands seem to work fine.
bash -c 'echo su_password | sudo -S /tmp/startup.sh; make -j16'
I've also tried to just set it on the command line, without success:
bash -c 'export <path>; make -j16'
What is the best way to set the path in the container so installed applications can be found? I don't want to hard-code them in the Dockerfile, but will at this point if I must.
I have this working now. As our path is very long, I had set it in a variable and was passing that in on the command line. It seems this was causing issues:
export PATH=$PATH/...
vs
export PATH=$PATH:/...
Now I am just specifying the whole path each time and everything is working.
bash -c 'export PATH=$PATH/<dir>/<program>/bin:/<dir>/<program>/bin:...; make -j16'
Below is the command I am trying to run:
docker exec sandbox_1 'influxd-ctl sandbox_1:8091'
I understand that apparently this means the container will execute it with a different shell that does have the necessary $PATH but I'm not sure how to deal with that.
For what it's worth, I tried influxd-ctl without the single quotes and it didn't read the rest of the command.
docker exec sandbox_1 influxd-ctl sandbox_1:8091
Thoughts?
Update: I also tried running bash -c <string> as the command I passed to exec but that didn't seem to work either.
Single quotes shouldn't be used here. docker exec takes the command and its arguments as separate arguments.
The correct command in your case should be:
docker exec <container> influxd-ctl <container>:8091
You can also test the command when having a shell inside the container like this:
docker exec -it <container> bash
You should then (provided bash is available inside the container, otherwise other shells can be used instead) get a root shell like this:
root@<container>:~#
Note: The working dir might be different based on where it was set in the Dockerfile used to build the image of the container.
In this now-interactive shell inside the container, you can try your command directly, without docker exec passing things around.
root@<container>:~# influxd-ctl <container>:8091
If you find that your command doesn't work there, then probably the influxd-ctl command expects different parameters from what you are suggesting.
In a Dockerfile, the RUN instruction has two forms, shell and exec:
# shell form
RUN <command>
# exec form
RUN ["executable", "param1", "param2"]
When the shell form is used, the <command> is run inside a shell, by prepending a shell invocation to it (i.e. sh -c "<command>").
So far so good. The question is: how does the exec form work? How are commands executed without a shell? I haven't found a satisfying answer in the official docs.
The exec form runs your command with the same OS syscall that Docker would use to run the shell itself. It's just doing the namespaced version of the fork/exec that Linux uses to run any process. The shell itself is a convenience that provides PATH handling, variable expansion, IO redirection, and other scripting features, but these aren't required to run processes at the OS level. This question may help you understand how Linux runs processes.
This looks like a Dockerfile.
With the RUN shell form, the commands run one at a time inside the container's environment, and the default shell for that environment (usually /bin/sh, often Bash) is spawned for each command. Prefixing a command with sh -c is just that shell wrapping made explicit, so you are in effect doing the same thing.
In shell form, the command runs inside a shell, as /bin/sh -c "<command>":
RUN apt-get update
The exec form allows execution of commands in images that don't have /bin/sh:
RUN ["apt-get", "update"]
The shell form is easier to write and lets the shell parse variables, for example:
CMD sudo -u ${USER} java ....
The exec form does not require the image to have a shell.
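A small sketch contrasting the two forms (APP_DIR is just an illustrative variable, not something from the question):
ENV APP_DIR=/opt/app
# shell form: /bin/sh expands $APP_DIR before running the command
RUN echo "installing into $APP_DIR"
# exec form: no shell is involved, so $APP_DIR is passed through literally
RUN ["echo", "installing into $APP_DIR"]
# exec form can still get expansion by invoking a shell explicitly
RUN ["/bin/sh", "-c", "echo installing into $APP_DIR"]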
My current setup for running a Docker container is along these lines:
I've got a main.env file:
# Main
export PRIVATE_IP=`echo localhost`
export MONGODB_HOST="$PRIVATE_IP"
export MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
In my service file (Upstart), I source this file: . /path/to/main.env
I then call docker run with multiple -e for each of the environment variables I want inside of the container. In this case I would call something like: docker run -e MONGODB_URL=$MONGODB_URL ubuntu bash
I would then expect MONGODB_URL inside of the container to equal mongodb://localhost:27017/development. Note that in reality the echo localhost is replaced by a curl to Amazon's API to get the actual PRIVATE_IP.
This becomes unwieldy as the number of environment variables you need to give your container grows. The fine point here is that the environment variables need to be resolved at run time, for example with a call to curl or by referring to other env variables.
The solution I was hoping to use is:
calling docker run with an --env-file parameter such as this:
# Main
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
Then my docker run command would be significantly shortened to docker run --env-file=/path/to/main.env ubuntu bash (keep in mind I usually have around 12-15 environment variables).
This is where I hit my problem which is that inside the container none of the variables resolve as expected. Instead I end up with:
PRIVATE_IP=`echo localhost`
MONGODB_HOST="$PRIVATE_IP"
MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
I could circumvent this by doing the following:
1. Sourcing the main.env file.
2. Creating a file containing just the names of the variables I want (meaning Docker would look them up in the environment).
3. Calling docker run with this file as the argument to --env-file.
This would work, but it would mean I need to maintain two files instead of one, and really wouldn't be that big an improvement over the current situation.
What I would prefer is to have the variables resolve as expected.
The closest question to mine that I could find is:
12factor config approach with Docker
Create a .env file, for example:
test=123
val=Guru
Execute the command:
docker run -it --env-file=.env bash
Inside the bash shell, verify using:
echo $test (should print 123)
Both --env and --env-file set up variables as-is and do not resolve nested variables.
Solomon Hykes talks about configuring containers at run time and the various approaches. The one that should work for you is to volume-mount main.env from the host into the container and source it.
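A rough sketch of that approach, where your-app is a placeholder for whatever the container should actually run:
docker run -v /path/to/main.env:/main.env ubuntu bash -c 'source /main.env && exec your-app'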
I just faced this issue as well; what solved it for me was specifying the --env-file or -e KEY=VAL options before the name of the container image. For example:
Broken:
docker run my-image --env-file .env
Fixed:
docker run --env-file .env my-image
Creating an env file that is nothing more than key/value pairs means it can be processed with normal shell commands and appended to the environment. Look at bash's -a (allexport) option.
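A sketch of that idea, assuming main.env contains plain KEY=value lines as in the --env-file variant shown above:
set -a                 # auto-export every variable assigned from here on (allexport)
. /path/to/main.env    # command substitutions and nested references are resolved now, on the host
set +a
docker run -e MONGODB_HOST -e MONGODB_URL ubuntu bash   # passing just the names picks the values up from the host environment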
What you can do is create a startup script that is run when the container starts. So if your current Dockerfile looks something like this:
FROM ...
...
CMD command
Change it to
FROM ...
...
ADD start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]
In your start.sh script do the following:
#!/bin/bash
export PRIVATE_IP=`echo localhost`
export MONGODB_HOST="$PRIVATE_IP"
export MONGODB_URL="mongodb://$MONGODB_HOST:27017/development"
command
I had a very similar problem to this. If I passed the contents of the env file to docker as separate -e directives, everything ran fine; however, if I passed the file using --env-file, the container failed to run properly.
It turns out there were some spurious line endings in the file (I had copied it from Windows and was running Docker in Ubuntu). When I removed them, the container behaved the same with --env or --env-file.
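For anyone hitting the same thing, one way to check for and strip Windows-style line endings (assuming the file is called main.env; adjust the name to yours):
file main.env                  # reports "with CRLF line terminators" if carriage returns are present
sed -i 's/\r$//' main.env      # strip the trailing carriage return from every line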
I had this issue when using docker run from a separate run script, run.sh, since I wanted the credentials ADMIN_USER and ADMIN_PASSWORD to be accessible in the container but not show up in the command.
Following the other answers and passing a separate environment file with --env or --env-file didn't work for my image (though it worked for the Bash image). What worked was creating a separate env file...
# env.list
ADMIN_USER='username'
ADMIN_PASSWORD='password'
...and sourcing it in the run script when launching the container:
# run.sh
source env.list
docker run -d \
-e ADMIN_USER=$ADMIN_USER \
-e ADMIN_PASSWORD=$ADMIN_PASSWORD \
image_repo/name:tag