I have the Docker configuration task and I need to execute a bash script that uses Bamboo variables inside a Docker container.
Is there a way to pass all Bamboo variables to the Docker container?
I have lots of Bamboo plans with quite a few different variables in them, so putting all the variables into container environment variables by hand is not an option.
Of course, I can dump them into a file in one task and parse the variables from that file in the Docker task, but I was hoping to find an easier solution.
Thanks!
What version of Bamboo are you using?
There was a problem with Bamboo variables in Docker containers in some versions of Bamboo, which was fixed in Bamboo 6.1.0:
Unable to use variables in Container name field in Run docker task
Workaround:
Create a Script Task that runs before the Docker Task.
Run commands like
echo "export sourcepath=$ini_source_path" > scriptname.sh
chmod +x scriptname.sh
The Docker Task will map ${bamboo.working.directory} to the Docker /data volume, so the just-created scriptname.sh script is available in the Docker container. The script will be executed and will set the variable correctly.
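Inside the container you could then source the generated script before running anything else (a minimal sketch, assuming the working directory is mounted at /data as described above):
source /data/scriptname.sh
echo "$sourcepath"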
More info in this post:
How to send bamboo variables from Bamboo script to docker container?
I think the only way is what you already mentioned - dump the env variables to a file in a dedicated task earlier in the flow. Say you do it like so:
#!/bin/bash
env | grep ^bamboo_ > my_env_file
Note the strict regex preventing dumping variables such as PATH.
Then, in the Docker task, add the following to "Additional arguments":
--env-file=my_env_file
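Outside Bamboo, the equivalent plain Docker invocation would be (a sketch; the image name is a placeholder):
docker run --env-file=my_env_file my-image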
You can use the env_file option with Compose:
web:
  env_file:
    - your-variables.env
or use docker run --env-file=your-variables.env ....
The .env file is a simple key/value text file:
# my env file
BAMBOO_ENV=development
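To verify that the variables actually arrive in the container, you can print the environment of a throwaway container (a sketch; alpine is just an arbitrary small image):
docker run --rm --env-file=your-variables.env alpine env | grep BAMBOO_ENV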
Is it possible for a Dockerfile / docker-compose file to access the host's env vars at build time when running docker-compose build app, or do you have to manually pass them into the build command?
Environment variables from the host that you want to use at build time would need to be passed into the build command with the --build-arg flag described here.
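For example (a minimal sketch; the variable name MY_VAR and service name app are assumptions), declare the argument in the Dockerfile, then forward the host value at build time:
# Dockerfile
ARG MY_VAR
ENV MY_VAR=${MY_VAR}
# build command on the host
docker-compose build --build-arg MY_VAR="$MY_VAR" app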
I'm working on a dotnet core docker container (not aspnet), and I'd like to specify configuration options for it through appsettings.json. These values will eventually be filled in through environment variables in Kubernetes.
However, for local development, how do we easily pass in these settings without storing them in the container?
You can map a local file or directory into the container with docker -v local_path:container_path.
If you're going to use Kubernetes, you can use a ConfigMap as well.
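For local development, the volume mapping could look like this (a sketch; the paths and image name are assumptions about your layout):
docker run -v "$(pwd)/appsettings.json:/app/appsettings.json" my-dotnet-image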
You can pass env variables while running the container with -e flag of the command docker run.
With this method, you’ll have to pass each variable in the command line. For example, docker run -e VAR1=value1 -e VAR2=value2
If this gets cumbersome, you can write these values to an env file and use this file like so, docker run --env-file=filename
For reference, you can check out the official docs.
I am trying to set up our production environment in a docker image.
After spending some hours compiling software, I realized that I forgot to set the locale environment variables in the Dockerfile.
Is there a way to permanently commit environment variables to an image?
I've only come across the Dockerfile way of doing this, and I don't want to rebuild from scratch and lose all the work already done.
Setting those variables in .bashrc is not working, as the docker run command seems to bypass those settings.
Is there a way to permanently commit environment variables to an image?
That is the ENV directive in a Dockerfile:
ENV <key> <value>
ENV <key>=<value> ...
But since you don't want to rebuild the image (although you could append the ENV lines at the end of the Dockerfile and benefit from the cache for most of the build), you can still launch your containers with docker run -e "variable=value".
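If you do opt for the rebuild, appending the locale variables at the end of the existing Dockerfile keeps all earlier layers cached (a sketch; the exact locale values are assumptions):
ENV LANG=en_US.UTF-8
ENV LC_ALL=en_US.UTF-8
Then rebuild with docker build -t myimage . and only the final layers are recreated.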
You can change environment variables during a docker commit operation to modify an existing image. See https://docs.docker.com/engine/reference/commandline/commit/ and this Stack Overflow Q&A: Docker Commit Created Images and ENTRYPOINT (see the answer by sxc731). It is not clear to me how one sets multiple ENV variables, but it might take a separate --change for each.
Here is an example of what I was just doing (bash shell):
docker run -it --entrypoint /bin/bash $origimghash
# make changes, leave running, then in another shell:
EP='ENTRYPOINT ["python","/tmp/deploy/deploy.py"]'
ENV='ENV no_proxy "*.local, 169.254/16"'
# capture the new image ID so it can be inspected afterwards
newhash=$(docker commit "--change=$ENV" "--change=$EP" $runninghash me/myimg)
docker inspect -f "{{ .Config.Env }}" $newhash
I've created a docker image for my database server and one for the web application. Using the documentation, I'm able to link both containers using environment variables as follows:
value="jdbc:postgresql://${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}/db_name"
It works fine now, but it would be better if the environment variables were more general and didn't contain a static port number. Something like:
value="jdbc:postgresql://${DB_URL}:${DB_PORT}/db_name"
Is there any way to link the environment variables, for example by using the ENV command in the Dockerfile (ENV DB_URL=$DB_PORT_5432_TCP_ADDR), or by using the --env argument when running the image (docker run ... -e DB_URL=$DB_PORT_5432_TCP_ADDR docker_image)?
Without building this kind of functionality into your Docker startup shell scripts or another orchestration mechanism, it is not possible at the moment to create environment variables like the ones you describe. You do mention a couple of workarounds; however, the problem with -e DB_URL=... in your docker run command is that $DB_PORT_5432_TCP_ADDR is only set inside the container by the link, not in the host shell, so you cannot pass its value at run time. Typically this is what your orchestration layer is used for: service discovery and passing this kind of data among your containers. There is at least one workaround mentioned here on SO that involves a small shell script, set as your CMD or ENTRYPOINT, that maps the environment variable inside the container.
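A minimal sketch of that entrypoint approach (the generic names DB_URL and DB_PORT come from the question; the script name is an assumption):
#!/bin/sh
# entrypoint.sh: translate the link-generated variables into generic ones,
# then hand off to the container's real command
export DB_URL="$DB_PORT_5432_TCP_ADDR"
export DB_PORT="$DB_PORT_5432_TCP_PORT"
exec "$@"
In the Dockerfile you would then set ENTRYPOINT ["/entrypoint.sh"] so the mapping runs before your application starts.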
I'm not sure that I'm trying to do it the right way, but I would like to use docker.io as a way to package some programs that need to be run from the host.
However, these applications take filenames as arguments and need at least read access to those files. Some other applications generate files as output, and the user expects to retrieve those files.
What is the docker way of dealing with files as program parameters?
Start Docker with a mounted volume and use this directory to manipulate files.
See: https://docs.docker.com/engine/tutorials/dockervolumes/
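For example (a sketch; the paths, image, and program names are placeholders):
docker run --rm -v "$(pwd)/data:/data" my-image my-program /data/input.txt
Files the program writes to /data inside the container end up in ./data on the host.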
If you have apps that require arguments when they're run, you can just inject your parameters as environment variables when you run your docker container,
e.g.
docker run -e ENV_TO_INJECT=my_value .....
Then in your entrypoint (or cmd) make sure you just run a shell script
e.g. (in Dockerfile)
CMD ["/my/path/to/run.sh"]
Then in your run.sh file that gets run at container launch you can just access the environment variables
e.g.
./runmything.sh $ENV_TO_INJECT
Would that work for you?