I am attempting to provide a custom prompt to a Docker-based VS Code development environment, using the same prompt that works on my host machine. However, the value is being re-encoded, causing all the \ characters to be doubled. Hence,
ENV PS1=\[\e]0;\u#\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u#\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$
becomes
\[\]\[\e]0;\u#\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u#\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ \[\]
How do I provide escape characters correctly?
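One possible workaround (a sketch, not verified against this exact VS Code setup, and with a simplified prompt string) is to avoid ENV altogether and write the prompt into root's .bashrc, so bash itself interprets the backslashes at prompt time rather than the Dockerfile parser:

```dockerfile
# Sketch: the single quotes written into .bashrc keep the backslashes intact,
# so bash sees them first; \\$ in the outer double quotes yields a literal \$.
RUN printf '%s\n' "PS1='\u#\h:\[\033[01;34m\]\w\[\033[00m\]\\$ '" >> /root/.bashrc
```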
Related
I'm building an image that is based on an ubuntu image with systemd. I need to start
TigerVNC as a service which depends on some environment variables that I have defined in my
Dockerfile, like the password.
FROM ubuntu-systemd
ENV VNC_PW="some-password"
ENTRYPOINT ["/lib/systemd/systemd"]
The unit file for this service has a line that is:
ExecStart=/usr/sbin/runuser -l root -c "/some/script.sh"
Since systemd has its own environment, I don't have access to the environment variables
defined in my Dockerfile. I was expecting that running the script as root with
a login shell (the '-l' flag) would give me access to these variables, but it does not.
I know the variables I need are in /proc/1/environ, but I don't know how to load them,
for example by adding something to the .profile file for root.
Thank you.
It is a little tricky, but I can load the environment variables in /some/script.sh (run from the ExecStart property) by including the following line:
export `xargs --null --max-args=1 echo < /proc/1/environ`
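The backtick-and-xargs form breaks on values that contain spaces. A more robust variant of the same idea (a sketch; the sample file below stands in for /proc/1/environ, which is only meaningful inside the container) reads the null-separated entries one at a time:

```shell
#!/bin/bash
# Stand-in for /proc/1/environ: entries are null-separated KEY=VALUE pairs.
printf 'VNC_PW=some password\0VNC_GEOMETRY=1920x1080\0' > /tmp/environ.sample

# Read each null-terminated entry and export it; values with spaces survive.
while IFS= read -r -d '' entry; do
    export "$entry"
done < /tmp/environ.sample

echo "$VNC_PW"
```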
Currently my run command looks like this...
docker run -e DB_URL=$DB_URL -e DB_PORT=$DB_PORT ... <image name>
This works, but it is not very scalable. Is there a way to pass all configured env vars to the container without declaring each one?
I am using OSX and these are set in a .bash_profile.
env > envFile && docker run --env-file=envFile alpine env
However, I would not recommend doing this, as it will pass even unnecessary information to the container.
You should rather use a compose file, or maybe even a simple script, to pass in only the variables that are actually needed.
This might even mess with the shell inside the container, for things like prompts and locales.
Ultimately, the best option is to explicitly list the vars you want to pass. There should only be a finite number of them, they shouldn't change often, and this solution is more robust and more secure.
Note that you can simplify your example. If you want to pass in a defined var with its current value, just name the var instead of setting it:
docker run -e DB_URL -e DB_PORT ... <image name>
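If the variables share a common prefix (DB_ here is an assumption; adjust it to your naming), you can also build the -e flags automatically rather than listing each one. A sketch that only prints the command instead of running it:

```shell
#!/bin/bash
export DB_URL=localhost DB_PORT=5432 UNRELATED=x   # demo values

# Collect a -e flag for every exported variable whose name starts with DB_.
# (Note: this simple env|cut parse assumes no multi-line values.)
args=()
for name in $(env | cut -d= -f1 | grep '^DB_'); do
    args+=(-e "$name")
done

# Printed rather than executed, for illustration.
echo docker run "${args[@]}" myimage
```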
I put Docker in swarm mode and did the following:
echo "'admin'" | docker secret create password -
docker service create \
--network="host" \
--secret source=password,target=password \
-e PASSWORD='/run/secrets/password' \
<image>
I was not able to pass the created password secret to the service through an environment variable.
Please help me figure out where I am going wrong.
You are misunderstanding the concept of docker secrets.
The whole point of creating secrets is avoiding putting sensitive information into environment variables.
In your example the PASSWORD environment variable will simply carry the value /run/secrets/password, which is a file name and not the password admin.
A valid use case for docker secrets would be that your Docker image reads the password from that file.
Check out the docs here, especially the example about MySQL:
the environment variables MYSQL_PASSWORD_FILE and MYSQL_ROOT_PASSWORD_FILE to point to the files /run/secrets/mysql_password and /run/secrets/mysql_root_password. The mysql image reads the password strings from those files when initializing the system database for the first time.
In short: your docker image should read the content of the file /run/secrets/password
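In practice that means the container's startup code does something like the following (a sketch; the printf line only simulates the file Swarm would mount at /run/secrets/password):

```shell
#!/bin/bash
# Simulate the Swarm-mounted secret so the sketch runs outside a service.
mkdir -p /tmp/run-secrets
printf 'admin' > /tmp/run-secrets/password
SECRET_FILE=/tmp/run-secrets/password   # in a real service: /run/secrets/password

# Read the password from the file; never pass it through docker run -e.
PASSWORD="$(cat "$SECRET_FILE")"
echo "password length: ${#PASSWORD}"
```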
There is no standard here.
The Docker docs discourage using environment variables, but there is confusion over whether that means setting the password directly as a string in the "environment" section or any other use of environment variables within the container.
Also, using a string instead of a secret, when the same value might be used in multiple services, requires checking and changing it in multiple places instead of one secret value.
Some images, like mariadb, use env variables with a _FILE suffix to populate the suffixless version of the variable with the secret file's contents. This seems to be OK.
Using Docker should not require redesigning the application architecture only to support secrets in files. Most other orchestration tools, like Kubernetes, support putting secrets into env variables directly. Nowadays this is not generally considered bad practice. Docker Swarm simply lacks good practices and proper examples for passing a secret to an env variable.
IMHO the best way is to use the entrypoint as a "decorator" to prepare the environment from secrets.
A proper entrypoint script can be written as an almost universal way of processing secrets, because we can pass the original image's entrypoint as an argument to our new entrypoint script, so the original entrypoint does its own work after our script has prepared the container.
Personally, I am using the following entrypoint with images containing /bin/sh:
https://github.com/DevilaN/docker-entrypoint-example
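The idea can be sketched in a few lines (this is my own minimal version, not the script from the linked repo; the APP_ prefix, file name, and demo values are assumptions):

```shell
#!/bin/bash
# Demo setup: a secret file plus the _FILE variable pointing at it.
printf 'admin' > /tmp/password.secret
export APP_PASSWORD_FILE=/tmp/password.secret

# For every APP_*_FILE variable, load the file's contents into the
# suffixless variable, then drop the _FILE variable itself.
for var in $(env | cut -d= -f1 | grep '^APP_.*_FILE$'); do
    target="${var%_FILE}"
    export "$target"="$(cat "${!var}")"
    unset "$var"
done

# Hand off to the wrapped image's original entrypoint, passed as arguments.
exec "$@"
```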
I am trying to use one Dockerfile for both my production and development environments. The only difference between production and development is the environment variables I set. Therefore I would like some way to import the environment variables from a file. Before using Docker I would simply do the following:
. ./setvars
./main.py
However, if I replace ./main.py with the Docker equivalent
. ./setvars
docker run .... ./main.py
then the variables will be set on the host and not be accessible from the Docker instance. Of course, a quick and dirty hack would be to make a file with
#!/bin/bash
. ./setvars
./main.py
and run that in the instance. That would however be really annoying, since I have lots of scripts I would like to run (with the same environment variables), and would then have to create an extra script for every one of them.
Is there any other solution to get my environment variables inside Docker without using a different Dockerfile and the method described above?
Your best option is to use either the -e flag or the --env-file option of the docker run command.
The -e flag allows you to specify key/value pairs of env variables,
for example:
docker run -e ENVIRONMENT=PROD
You can use the -e flag several times to define multiple env
variables. For example, the docker registry itself is configurable
with -e flags; see:
https://docs.docker.com/registry/deploying/#running-a-domain-registry
The --env-file option allows you to specify a file. Each line of the file
must be of the form VAR=VAL.
Full documentation:
https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables-e-env-env-file
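Applied to the setvars file from the question (a sketch; variable names and values are placeholders), the env file is just plain VAR=VAL lines, with no 'export' and no quoting:

```shell
# setvars.env -- one VAR=VAL per line; no 'export', no quoting
DB_URL=localhost
DB_PORT=5432
```

Then every script runs with the same environment via: docker run --env-file=setvars.env <image> ./main.py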
I've created a Docker image for my database server and one for the web application. Using the documentation, I'm able to link the two containers using environment variables as follows:
value="jdbc:postgresql://${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}/db_name"
It works fine now, but it would be better if the environment variables were more general and did not contain a static port number. Something like:
value="jdbc:postgresql://${DB_URL}:${DB_PORT}/db_name"
Is there any way to link the environment variables, for example by using the ENV instruction in the Dockerfile (ENV DB_URL=$DB_PORT_5432_TCP_ADDR), or by using the -e argument when running the image (docker run ... -e DB_URL=$DB_PORT_5432_TCP_ADDR docker_image)?
Without building this kind of functionality into your Docker startup shell scripts or another orchestration mechanism, it is not currently possible to create environment variables like the ones you describe. You do mention a couple of workarounds. However, the problem with using -e DB_URL=... in your docker run command is that $DB_PORT_5432_TCP_ADDR is not known on the host at run time, so you will not be able to set this value there. Typically this is what your orchestration layer is for: service discovery and passing this kind of data among your containers. There is at least one workaround mentioned here on SO that involves a small shell script, placed in your CMD or ENTRYPOINT directives, that derives the environment variable inside the container.
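That workaround can be sketched as a small wrapper entrypoint (the first two exports only simulate what docker --link would inject; the names come from the question):

```shell
#!/bin/bash
# Simulated link-generated variables (docker --link would set these):
export DB_PORT_5432_TCP_ADDR=172.17.0.2
export DB_PORT_5432_TCP_PORT=5432

# entrypoint.sh: derive the generic names at container start, when the
# link-generated values are finally known.
export DB_URL="$DB_PORT_5432_TCP_ADDR"
export DB_PORT="$DB_PORT_5432_TCP_PORT"

exec "$@"    # continue with the real command (e.g. the web app)
```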