I've been banging my head against a wall for a couple hours trying to figure this out.
My Node.js code uses an environment variable, GAC, to authenticate with Google Cloud.
let { GAC } = process.env
const firestore = new Firestore({ keyFilename: GAC })
GAC is the path to the JSON file that contains secret keys, IDs, etc.
On my local machine, printenv displays:
GAC="${HOME}/project-id.json"
And obviously the file itself is located where it should be.
Now I want to run a Docker container, passing it the GAC variable and the JSON file, so that my code can run as normal.
$ docker run \
-e GAC=$GAC \
-v $GAC:$GAC \
-t image \
printenv
I want to pass the unexpanded GAC variable to docker. So printenv inside the container should show the exact same thing as printenv on my local machine.
GAC="${HOME}/project-id.json"
However, the $GAC variable is expanded by the shell before the command is run, so printenv inside the container ends up looking like this:
GAC="C:\Users\eric\project-id.json"
which is incorrect.
How do I do this properly? I want my code to retrieve the GAC environment variable, no matter whether it's running on my local machine or in a container.
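A common workaround (a sketch, assuming $GAC on the host expands to the real path of the key file): mount the file at a fixed in-container path and point GAC at that path, so the host-specific location never leaks into the container.
$ docker run \
-e GAC=/secrets/project-id.json \
-v "$GAC":/secrets/project-id.json:ro \
-t image \
printenv
The Node.js code then reads GAC exactly as before, and the same value works on any host.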
I want to run the command eval $(envkey-source) to set certain environment variables using envkey. I install it, set my ENVKEY variable, and then try to import all the environment variables. I do this all via Docker. However, Docker gives an error on this command:
Step 31/35 : RUN eval $(envkey-source)
---> Running in 6a9ebf1ede96
/bin/sh: 1: export: : bad variable name
The command '/bin/sh -c eval $(envkey-source)' returned a non-zero code: 2
I tried reading the envkey documentation, but it says nothing about Docker.
I have installed envkey using following commands:
ENV ENVKEY=yada_yada
RUN curl -s https://raw.githubusercontent.com/envkey/envkey-source/master/install.sh | bash
Up to here, all goes well. I get verbose output on the console with suggestions about how to run envkey-source to get all the environment variables set.
The problem comes at this step:
RUN eval $(envkey-source)
This fails with the error shown above.
You can't do this, for a couple of reasons. The envkey documentation eventually links to an example in their GitHub which you might find informative.
Each Dockerfile RUN command runs a new shell in a new container. In particular, environment variables set within a RUN command are lost after it exits. Any form of RUN export ... is a no-op. If variables are static you can set them using the ENV directive, but in this case where you're running a program that needs to generate them dynamically, you need another approach.
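To make that concrete, a two-line experiment (a sketch) shows each RUN starting from a fresh shell:
RUN export FOO=bar
RUN echo "FOO is ${FOO:-unset}"   # prints "FOO is unset"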
A typical pattern here is to use a shell script as your container's ENTRYPOINT. That does some initial setup and then replaces itself with the container's CMD. Since the CMD runs in the same shell environment as the rest of the script, you can do dynamic variable setup here. The script might look like:
#!/bin/sh
# Generate and load the dynamic environment variables...
eval "$(envkey-source)"
# ...then replace this shell with the container's CMD.
exec "$@"
The other thing to keep in mind here is that anyone can docker inspect your image and get its environment variables back out, or docker run imagename /usr/bin/env. If you could run envkey-source in the Dockerfile then the environment variables would be available in the image in clear text, which defeats the purpose. Even embedding the key in the image effectively leaks it. You should pass this at runtime using a docker run -e option or a Docker Compose environment: key, relaying it from the host's environment.
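For example, relaying the key from the host's environment at runtime (my_image is a placeholder name):
docker run -e ENVKEY="$ENVKEY" my_image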
I have looked around online and tried the obvious route (explained below) to remove an environment variable from a Docker image.
1 - I create a container from a modified ubuntu image using:
docker run -it --name my_container my_image
2 - I inspect the container and see the two environment variables that I want to remove using:
docker inspect my_container
which yields:
...
"Env": [
"env_variable_1=abcdef",
"env_variable_2=ghijkl",
"env_variable_3=mnopqr",
...
3 - I exec into the container and remove the environment variables via:
docker exec -it my_container bash
unset env_variable_1
unset env_variable_2
4 - I check to make sure the specified variables are gone:
docker inspect my_container
which yields:
...
"Env": [
"env_variable_3=mnopqr",
...
5 - I then commit this modified container as an image via:
docker commit my_container my_new_image
6 - And check for the presence of the deleted environment variables via:
docker run -it --name my_new_container my_new_image
docker inspect my_new_container
which yields (drumroll please):
...
"Env": [
"env_variable_1=abcdef",
"env_variable_2=ghijkl",
"env_variable_3=mnopqr",
...
In other words, the unset operations are not carried through from the modified container to the new image by docker commit.
What am I missing here? Is unset really deleting the variables? Should I use another method to remove these environment variables, or another/modified method to commit the container as an image?
PS: I confirmed the variables exist inside the container via env, and confirmed they were no longer set (checked the same way) after running unset my_variable.
Thanks for your help!
You need to edit the Dockerfile that built the original image. The Dockerfile ENV directive has a couple of different syntaxes to set variables but none to unset them. docker run -e and the Docker Compose environment: setting can't do this either. This is not an especially common use case.
Depending on what you need, it may be enough to set the variables to an empty value, though this is technically different.
FROM my_image
ENV env_variable_1=""
RUN test -z "$env_variable_1" && echo variable 1 is empty
RUN echo variable 1 is ${env_variable_1:-empty}
RUN echo variable 1 is ${env_variable_1-unset}
# on first build will print out "empty", "empty", and nothing
The big hammer is to use an entrypoint script to unset the variable. The script would look like:
#!/bin/sh
# Clear the unwanted variables, then hand off to the container's CMD.
unset env_variable_1 env_variable_2
exec "$@"
It would be paired with a Dockerfile like:
FROM my_image
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["same", "as", "before"]
docker inspect would still show the variable as set (because it is in the container metadata) but something like ps e that shows the container process's actual environment will show it unset.
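A quick way to see this (a sketch; my_new_image is a placeholder for the rebuilt image, and the sh -c command becomes the CMD, so the entrypoint wrapper runs first):
docker run --rm my_new_image sh -c 'echo "env_variable_1 is ${env_variable_1-unset}"'
# prints "env_variable_1 is unset"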
As a general rule you should always use the docker build system to create an image, and never use docker commit. ("A modified Ubuntu image" isn't actually a reproducible recipe for debugging things or asking for help, or for rebuilding it when a critical security patch appears in six months.) docker inspect isn't intrinsically harmful but has an awful lot of useless information; I rarely have reason to use it.
Maybe you can try it this way, as in this answer:
docker exec -it -e env_variable_1 my_container bash
And then commit the container as usual.
I personally was looking to remove all environment variables to get a fresh image, but without losing the contents inside the image.
The problem was that when I reused this image and set those environment variables to new values, they were not changed; the old values were still present.
My solution was to reinitialize the image with docker export and then docker import.
Export
First, spin up a container from the image, then export the container to a tarball:
docker export {container_name} > my_image.tar
Import
Import the tarball to a new image
docker import my_image.tar my_image_tag:latest
Doing this will reset the image, meaning only the contents of the container will remain.
All layers, environment variables, entrypoint, and command data will be gone.
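If you still need an entrypoint or command afterwards, docker import accepts --change to re-apply Dockerfile instructions at import time (a sketch; the entrypoint path is a placeholder):
docker import --change 'ENTRYPOINT ["/entrypoint.sh"]' my_image.tar my_image_tag:latest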
I've been trying to import a JSON file of environment variables into a newly created Cloud Composer instance using the Airflow CLI, but when running the below I get the error: Missing variables file.
gcloud composer environments run ${COMPOSER_NAME} \
--location=${COMPOSER_LOCATION} \
variables -- \
-i ${VARIABLES_JSON}
From looking at the source it seems that this happens when an environment variable file doesn't exist at the expected location. Is this because Cloud Composer sets up its variables in a different location so this CLI won't work? I've noticed that there's a env_var.json file that's created on the instance's GCS bucket, I realise I can overwrite this file but that doesn't seem like best practice.
It feels like a hack, but I copied variables.json over to my Composer environment's GCS bucket data folder and then it worked.
This is due to os.path.exists() checking the filesystem of the container that Airflow runs in. I chose this approach over overwriting env_var.json because this way the variables also show up in Airflow's UI.
Script for anyone interested:
# Path where the Composer bucket's data/ folder is mounted inside the Airflow workers
COMPOSER_DATA_FOLDER=/home/airflow/gcs/data
# Derive the GCS bucket prefix from the environment's dagGcsPrefix
COMPOSER_GCS_BUCKET=$(gcloud composer environments describe ${COMPOSER_NAME} --location ${COMPOSER_LOCATION} | grep 'dagGcsPrefix' | grep -Eo "\S+/")
# Copy the variables file (assumed to be named variables.json) into the bucket's data/ folder
gsutil cp ${ENV_VARIABLES_JSON_FILE} ${COMPOSER_GCS_BUCKET}data
# Import the variables from the path as seen inside the Airflow container
gcloud composer environments run ${COMPOSER_NAME} \
--location ${COMPOSER_LOCATION} variables -- \
-i ${COMPOSER_DATA_FOLDER}/variables.json
Is there a way to pass a variable into an entrypoint.sh file? Let's say in this example I want to pass a list of animals using ENV animals="turtle, monkey, goose",
but I want to be able to pass different animals when running the container, for example docker run -t image animals="mouse,rat,kangaroo".
How do you go about passing arguments when running the docker run command?
The goal is to take that variable from the docker run command and insert it into the entrypoint.sh file.
Right now I hard-code it in my Dockerfile, but I want to be able to set it when running docker run so I don't always have to change the Dockerfile.
FROM anapsix/alpine-java:8u121b13_jdk
ENV FILE_NAME="file_to_run.zip"
ENV animals="turtle, monkey, goose"
ADD ${FILE_NAME} .
RUN echo "${FILENAME} ${animals}" > ./entrypoint.sh
CMD [ "/bin/ash", "./entrypoint.sh" ]
It looks like you might be confusing the image build with the container run. If the difference between the two isn't immediately clear, I'd recommend reviewing some other questions and docs like:
In Docker, what's the difference between a container and an image?
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
RUN echo "${FILENAME} ${animals}" > ./entrypoint.sh
With the above, the variables will be expanded during the image build. The entrypoint.sh will not contain ${FILENAME} ${animals}. Instead, it will contain
file_to_run.zip turtle, monkey, goose
After the build, the docker run command will create a container from that image and run the above script with the environment variables defined but never used since the script already has the variables expanded. To prevent the variable expansion, you need to escape the $ or use single quotes to prevent the expansion, e.g.
RUN echo "\${FILENAME} \${animals}" > ./entrypoint.sh
or
RUN echo '${FILENAME} ${animals}' > ./entrypoint.sh
I would also recommend being explicit with a #!/bin/ash at the top of this script. Then, when you run the container, do not override the command with parameters after the image name. Instead, set the environment variables with the appropriate flag to run:
docker run -it -e animals="mouse,rat,kangaroo" image
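Putting the pieces together, a corrected version of the question's Dockerfile might look like this (a sketch; the single quotes keep the variables unexpanded until the container runs):
FROM anapsix/alpine-java:8u121b13_jdk
ENV FILE_NAME="file_to_run.zip"
ENV animals="turtle, monkey, goose"
ADD ${FILE_NAME} .
RUN echo '#!/bin/ash' > ./entrypoint.sh && \
echo 'echo "${FILE_NAME} ${animals}"' >> ./entrypoint.sh
CMD [ "/bin/ash", "./entrypoint.sh" ]
Running docker run -it -e animals="mouse,rat,kangaroo" image would then print file_to_run.zip mouse,rat,kangaroo.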
Simplest way, forward individual variables:
docker run ... --env animals="turtle, monkey, goose" --env FILE_NAME="file_to_run.zip"
Forward several variables using a file:
Or if you need to grab all your environment variables from outside, you can do something like this first:
printenv | grep -E 'animals|FILE_NAME' > my-env
The grep is there because Docker doesn't accept some variables (e.g. ones with spaces in them) that you might have in your real environment.
Then use that file in your Docker command:
docker run ... --env-file ./my-env
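For reference, the generated my-env file is just KEY=value lines (values taken from the question):
animals=turtle, monkey, goose
FILE_NAME=file_to_run.zip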
The latter is also useful if you want to avoid sending environment variables to logs (like for sensitive variables). I use this approach in a CI/CD pipeline that runs some scripts.
Using variables inside Docker:
With either approach, the environment variables actually become available to scripts running inside the container to use.
@BMitch's answer has more complete details about how to achieve this in your case, where you have related logic in both build and execution.
On their official site (https://docs.docker.com/reference/builder/#env), the Docker docs state:
The ENV instruction sets the environment variable <key> to the value <value>. This value will be passed to all future RUN instructions. This is functionally equivalent to prefixing the command with <key>=<value>.
I tried:
http_proxy=<PROXY> docker build .
However, this doesn't seem to have the same effect as adding ENV http_proxy=<PROXY> inside the Dockerfile. Why?
This is functionally equivalent to prefixing the command with <key>=<value>
This does not mean it is the same as prefixing the docker build command, since docker build itself runs outside of any container.
It means that using ENV is the same as prefixing the commands that run inside the container.
For example, equivalent RUN statement would look like this:
RUN http_proxy=<PROXY> curl https://www.google.com
Or the equivalent command executed inside the container (via a shell):
$ http_proxy=<PROXY> curl https://www.google.com
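If the goal is to have the proxy available during the build itself, the usual route (a sketch, beyond what's quoted above) is a build argument. http_proxy is one of Docker's predefined build args, so no ARG line is needed in the Dockerfile:
$ docker build --build-arg http_proxy=<PROXY> .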