As far as I understand, Google Cloud Run sets a $PORT environment variable by itself that my application should listen on.
Let's say my application wants to start on $PORT2.
Can I define on the Google Cloud Run Environment Variables page (or elsewhere) that $PORT2 envvar should take the value of $PORT?
Obviously the other solution would be to change my application to start on $PORT; I'm just curious whether this is possible?
Thanks
You can pretty much only achieve this by changing your container's entrypoint to a program (e.g. env) that remaps the environment variables before launching your program:
ENTRYPOINT ["/bin/sh", "-c", "env PORT2=$PORT ./your-app"]
Try it:
docker run --rm -e PORT=8080 busybox /bin/sh -c 'env PORT2=$PORT env'
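If it works, you should see both variables among the printed environment (other variables trimmed from the output):
PORT=8080
PORT2=8080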
Example use case: one Docker image that has two targets used by two containers (an API server and a worker).
I have a running Docker container based on the image my_base_image. Now that the container is running, can I set an environment variable using the export command with docker exec? If so, how?
I tried the following, but it doesn't work:
docker exec -i -t $(docker ps -q --filter ancestor=my_base_image) bash -c "export my_env_var=hey"
Basically I want to set my_env_var=hey as an environment variable inside the Docker container. I know this can be done in many ways, such as an env_file or the environment key in docker-compose, or ENV in a Dockerfile. But I just want to know whether it is possible using the docker exec command.
This is impossible. A process can never change the environment of any other process; it can only specify the initial environment of the processes it starts itself. In this case, your docker exec shell isn't launching the main container process, so it can't change that process's environment variables.
This is one of a number of changes that require you to stop, delete, and recreate the container. You should treat this as extremely routine container maintenance and plan for the container to be deleted eventually. That means, for example, keeping any data that needs to persist outside the container, ideally in an external database but possibly in a mounted volume.
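A minimal sketch of that workflow, assuming the container is named my_container (an assumption; the question only names the image, my_base_image):
docker stop my_container
docker rm my_container
docker run -d --name my_container -e my_env_var=hey my_base_image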
I'm working on a dotnet core Docker container (not aspnet). I'd like to specify configuration options for it through appsettings.json. These values will eventually be filled in through environment variables in Kubernetes.
However, for local development, how do we easily pass in these settings without storing them in the container?
You can mount local files or directories into the container with docker run -v local_path:container_path.
If you are going to use Kubernetes, you can use a ConfigMap as well.
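For local development, a minimal sketch of the volume approach might look like this (my-dotnet-app and both paths are placeholders for your own image and layout):
docker run -v "$(pwd)/appsettings.Development.json:/app/appsettings.json" my-dotnet-app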
You can pass env variables while running the container with the -e flag of the docker run command.
With this method, you'll have to pass each variable on the command line. For example: docker run -e VAR1=value1 -e VAR2=value2
If this gets cumbersome, you can write these values to an env file and use the file like so: docker run --env-file=filename
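For instance, if a file named app.env (name assumed) contains:
VAR1=value1
VAR2=value2
then docker run --env-file=app.env sets both variables inside the container.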
For reference, you can check out the official docs.
I am trying to use one Dockerfile for both production and development. The only difference between production and development is the set of environment variables I use. Therefore I would like some way to import the environment variables from a file. Before using Docker I would simply do the following:
. ./setvars
./main.py
However, if I replace ./main.py with the Docker equivalent
. ./setvars
docker run .... ./main.py
then the variables will be set on the host and not be accessible inside the Docker container. Of course, a quick and dirty hack would be to make a file with
#!/bin/bash
. ./setvars
./main.py
and run that in the container. That would however be really annoying, since I have lots of scripts I would like to run (with the same environment variables), and I would then have to create an extra script for every one of them.
Is there any other solution to get my environment variables inside Docker without using a different Dockerfile or the method described above?
Your best option is to use either the -e flag or the --env-file option of the docker run command.
The -e flag allows you to specify key/value pairs of environment variables, for example:
docker run -e ENVIRONMENT=PROD
You can use the -e flag several times to define multiple environment variables. For example, the Docker registry itself is configurable with -e flags; see:
https://docs.docker.com/registry/deploying/#running-a-domain-registry
The --env-file option allows you to specify a file, but each line of the file must be of the form VAR=VAL.
Full documentation:
https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables-e-env-env-file
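Applied to your setvars workflow: if setvars can be rewritten as plain VAR=VAL lines (no export statements or shell quoting, since --env-file does not run the file through a shell), the original two commands become something like:
docker run --env-file ./setvars your-image ./main.py
where your-image is a placeholder for your actual image name.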
I've created a Docker image for my database server and one for the web application. Using the documentation, I'm able to link both containers using environment variables as follows:
value="jdbc:postgresql://${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}/db_name"
It works fine now, but it would be better if the environment variables were more general and did not contain a static port number. Something like:
value="jdbc:postgresql://${DB_URL}:${DB_PORT}/db_name"
Is there any way to link the environment variables together? For example, by using the ENV instruction in the Dockerfile (ENV DB_URL=$DB_PORT_5432_TCP_ADDR), or by using the --env argument when running the image (docker run ... -e DB_URL=$DB_PORT_5432_TCP_ADDR docker_image)?
Without building this kind of functionality into your Docker startup shell scripts or another orchestration mechanism, it is not currently possible to create environment variables like the ones you are describing. You do mention a couple of workarounds; however, the problem with using -e DB_URL=... in your docker run command is that $DB_PORT_5432_TCP_ADDR is not set on the host at the time you run the command (the link variables only exist inside the linked container), so you will not be able to set the value that way. Typically this is what your orchestration layer is for: service discovery and passing this kind of data among your containers. There is at least one workaround mentioned here on SO that involves a small shell script, used in your CMD or ENTRYPOINT directive, that derives the environment variable inside the container.
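A minimal sketch of that entrypoint workaround (entrypoint.sh and the start command are assumptions, not part of your original setup):
#!/bin/sh
# entrypoint.sh: derive the generic variables from the link-generated ones,
# then hand off to the real command.
export DB_URL=$DB_PORT_5432_TCP_ADDR
export DB_PORT=$DB_PORT_5432_TCP_PORT
exec "$@"
With the corresponding Dockerfile lines:
ENTRYPOINT ["/entrypoint.sh"]
CMD ["./start-webapp"]
where ./start-webapp stands in for your application's actual start command.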
I was looking around and saw some simple examples of HelloWorld running in a Docker container, like this one:
http://dotnet.dzone.com/articles/docker-%E2%80%98hello-world-mono
at the end of the Dockerfile, the author calls:
CMD ["mono", "/src/hello.exe"]
What I want to do is have a reusable image for our Console App: build the app and put it in a Docker image using a Dockerfile. That part makes sense to me. But then I want to be able to pass the Console App parameters at run time. Is that possible?
for example,
sudo docker run crystaltwix/helloworld -n "crystal twix"
where -n was a parameter I defined in my helloworld app.
You can use ENTRYPOINT foo rather than CMD foo to achieve this. All arguments after the image name in docker run are passed to foo.
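For the HelloWorld example above, that means replacing the CMD line with (a sketch, assuming the same layout as the linked article):
ENTRYPOINT ["mono", "/src/hello.exe"]
After rebuilding the image, docker run crystaltwix/helloworld -n "crystal twix" appends -n "crystal twix" to the entrypoint, so the app receives those as its arguments.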
@seanmcl's answer is the simplest... but if you have to pass secret values like application keys, you may have to worry about exposing them in process lists. So you could use environment variables that your app looks for during startup:
SECRET_KEY="crystal twix"
docker run -e APP_KEY="$SECRET_KEY" crystaltwix/helloworld