I'm running OpenResty (nginx) from the official alpine-fat Docker image, and the openresty process runs as the nobody user.
I need to set an nginx variable with the following directive:
set_by_lua $var 'return os.getenv("ENV_VAR")';
docker-compose.yml contains the following block:
build:
  context: .
  dockerfile: ./Dockerfile.nginx
environment:
  - ENV_VAR=value
But the nginx worker process does not seem to get its value, and $var stays empty.
I tried adding export ENV_VAR=value to /etc/profile, but it made no difference.
I also tried running openresty as the nginx user, but it still can't see the value of ENV_VAR.
How can I make this work, if it's possible at all?
Try adding env ENV_VAR; to your nginx config. By default nginx discards all inherited environment variables; this directive tells it to keep this one.
From https://nginx.org/en/docs/ngx_core_module.html#env
Syntax: env variable[=value];
Default:
env TZ;
Context: main
By default, nginx removes all environment variables inherited from its parent process except the TZ variable. This directive allows preserving some of the inherited variables, changing their values, or creating new environment variables.
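Applied to the question's setup, a minimal sketch of the relevant part of nginx.conf might look like the following (the listen port and location are illustrative, not taken from the question):
# main (top-level) context: preserve the variable passed in by docker-compose
env ENV_VAR;
events {}
http {
    server {
        listen 80;
        # os.getenv("ENV_VAR") now returns "value" instead of nil
        set_by_lua $var 'return os.getenv("ENV_VAR")';
        location / {
            return 200 $var;
        }
    }
}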
Related
I am setting an environment variable in a Dockerfile, but it is not visible to the users when I start the deployment. It only takes effect after I open a terminal and run source /etc/profile.d/ip.sh. Can this value be set for all users without that manual step? If yes, how do I achieve it?
I created a shell script and added it as /etc/profile.d/ip.sh:
IP="1.1.1.1"
export IPADDR=$IP
Dockerfile:
COPY ip.sh /etc/profile.d
RUN chmod 644 /etc/profile.d/ip.sh
RUN . /etc/profile.d/ip.sh
Please read the environment variables section of the Docker manual.
The reason you don't get what you want is that a Docker container does not behave like a full OS: no login shell runs at startup, so the /etc/profile.d scripts are never sourced (a Dockerfile sketch of the usual fix follows the excerpts below).
Here are some examples from the manual:
Set environment variables in containers
You can set environment variables in a service’s containers with the ‘environment’ key, just like with docker run -e VARIABLE=VALUE ...:
web:
  environment:
    - DEBUG=1
The “env_file” configuration option
You can pass multiple environment variables from an external file through to a service’s containers with the ‘env_file’ option, just like with docker run --env-file=FILE ...:
web:
  env_file:
    - web-variables.env
The “.env” file
You can set default values for any environment variables referenced in the Compose file, or used to configure Compose, in an environment file named .env:
$ cat .env
TAG=v1.5
$ cat docker-compose.yml
version: '3'
services:
  web:
    image: "webapp:${TAG}"
I'm trying to construct a Docker container in which I have two environment variables set to the same thing. I have
version: "3.2"
services:
sql-server-db:
image: mcr.microsoft.com/mssql/server:latest
ports:
- 3900:1433
env_file: ./tests/.test_env
...
command: /bin/bash /my-app/my-script.sh
and then in my tests/.test_env file I have
MY_DB_PASSWORD=reallylongpassword
SA_PASSWORD=${MY_DB_PASSWORD}
I would like to set the "MY_DB_PASSWORD" and "SA_PASSWORD" env vars to the same thing; however, the above doesn't do it, because "SA_PASSWORD" ends up set to the literal string "${MY_DB_PASSWORD}". How do I set both variables to the same value without hard-coding the "reallylongpassword" string twice?
This is not possible in an env file, because no variable substitution is done there; instead, do it in your script like this. This is also more flexible, since the fallback only applies when the variable is not already set.
SA_PASSWORD="${SA_PASSWORD:-$MY_DB_PASSWORD}"
If you are worried about password security, you should use Docker secrets, but that requires running in Swarm mode.
Here is the CMD from the Dockerfile used by the image. You can define your own CMD to override the behaviour and pick up your variable, or use the fallback trick above to take a default value.
CMD .\start -sa_password $env:sa_password -ACCEPT_EULA $env:ACCEPT_EULA -attach_dbs \"$env:attach_dbs\" -Verbose
# using your MY_DB_PASSWORD instead of SA_PASSWORD
CMD .\start -sa_password $env:my_db_password -ACCEPT_EULA $env:ACCEPT_EULA -attach_dbs \"$env:attach_dbs\" -Verbose
After reading the config section of the twelve-factor app methodology, I decided to override my config file's default values with environment variables.
I have 3 Dockerfiles, one for an API, one for a front-end and one for a worker. I have one docker-compose.yml to run those 3 services plus a database.
Now I'm wondering whether I should define the environment variables in the Dockerfiles or in docker-compose.yml. What's the difference between using one rather than the other?
See this:
You can set environment variables in a service’s containers with the 'environment' key, just like with docker run -e VARIABLE=VALUE ...
Also, you can use ENV in a Dockerfile to define an environment variable.
The difference is:
An environment variable defined in the Dockerfile is not only used during docker build; it also persists into the container. This means that even if you do not pass -e to docker run, the container will still have the environment variable with the value set in the Dockerfile.
An environment variable defined in docker-compose.yaml, on the other hand, only applies when the container is run through compose.
Maybe the following example makes it clearer:
Dockerfile:
FROM alpine
ENV http_proxy http://123
docker-compose.yaml:
app:
  environment:
    - http_proxy=http://123
If you define the environment variable in the Dockerfile, every container that uses this image will also have http_proxy set to http://123. But in practice you may only need the proxy while building the image, whereas the people who run the container may not need it at all, or may need a different one, so they would have to unset http_proxy in an entrypoint or override it with another value in docker-compose.yaml.
If you define the environment variable in docker-compose.yaml instead, users can choose their own http_proxy when they run docker-compose up, and http_proxy is simply not set if they don't configure it in docker-compose.yaml.
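A quick way to see the difference from the shell (the app-with-env image tag is illustrative):
# With ENV baked into the image, any container from that image has the value:
docker build -t app-with-env .                         # uses the Dockerfile above
docker run --rm app-with-env env | grep http_proxy     # -> http_proxy=http://123

# With only the compose-level environment:, the value exists only when the
# service is started through compose; a plain run of the base image has none:
docker run --rm alpine env | grep http_proxy           # -> (no output)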
I have a docker-compose file that lets me pass environment variables from a file (an env file). Since I have multiple environment variables, is there any option for a Dockerfile, like env_file in docker-compose, for passing multiple environment variables during docker build?
This is the docker-compose.yml
services:
  web:
    image: "node"
    links:
      - "db"
    env_file: "env.app"
AFAIK, there is no built-in way to inject environment variables from a file during the build step of a Dockerfile. In most cases, people end up using an entrypoint script and injecting the variables at docker run or docker-compose up time (a sketch follows at the end of this answer).
If it really is a necessity, you could write a shell wrapper that rewrites the values in the Dockerfile dynamically from a key-value text file as input, or do something like the snippet below, where the env file name has to be hard-coded in the Dockerfile.
COPY my-env-vars /
# exported variables are only visible to commands chained in this same RUN step
RUN export $(cat my-env-vars | xargs)
It's an open issue - https://github.com/moby/moby/issues/28617
PS - You need to be extra careful while using this approach because the secrets are baked into the image itself.
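A sketch of the runtime alternative mentioned at the start of this answer, loading the variables when the container starts rather than at build time (the file names docker-entrypoint.sh and my-env-vars are illustrative):
#!/bin/sh
# docker-entrypoint.sh: export KEY=VALUE pairs from a file, then run the real command.
set -e
if [ -f /my-env-vars ]; then
  set -a            # auto-export everything the file defines
  . /my-env-vars
  set +a
fi
exec "$@"
In the Dockerfile you would then COPY this script in and set ENTRYPOINT ["/docker-entrypoint.sh"], keeping the original CMD as the command that gets exec'd.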
I want to pass environment variables that are readable by the applications spun up by docker-compose up.
What is the proper way of using docker-compose up with varying configuration settings?
I don't want to use .env or environment: config, as the environment variables change frequently and it is insecure to save tokens in a file.
docker-compose run -e does work to a degree, but it loses a lot:
It does not map the ports defined for the services in docker-compose.yml.
Also, multiple services are defined in docker-compose.yml, and I don't want to rely on depends_on just because docker-compose up doesn't work for this.
Let's say I define service in docker-compose.yml
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
In my serviceA.js, I simply use the environment variable:
console.log("This is ", process.env.KEY, "running in service A");
When I run docker-compose run -e KEY=DockerComposeRun service-a
I do get the environment variable KEY read by serviceA.js
This is DockerComposeRun running in service A
However I could only get one single service running.
I could have used environment: in docker-compose.yml:
environment:
  - KEY=DockerComposeUp
But in my use case each docker-compose run needs different environment variable values, meaning I would have to edit the file every time before running docker-compose.
Also, more than one service uses the same environment variable; .env would even handle that better, but it is not what I want.
There doesn't seem to be a way to do the same with docker-compose up.
I have tried KEY=DockerComposeUp docker-compose up,
but what I get is undefined.
export doesn't work for me either; those approaches all seem to be about substituting variables into docker-compose.yml itself rather than passing them to the applications inside the containers.
To safely pass sensitive configuration data to your containers you can use Docker secrets. Everything passed through Secrets is encrypted.
You can create and manage secrets using the commands below:
docker secret create
docker secret inspect
docker secret ls
docker secret rm
Then use them in your docker-compose file, either by referring to existing secrets (external) or by using a file:
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
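Note that the top-level block only declares the secrets; each service also has to be granted access to them, and they then appear as files under /run/secrets/ inside the container. A sketch using the service from the question and the secret names from the snippet above:
services:
  service-a:
    build:
      context: .
      dockerfile: DockerfileA
    command: node serviceA.js
    secrets:
      - my_first_secret     # readable at /run/secrets/my_first_secret
      - my_second_secret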
You can use environment like this:
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
  environment:
    - KEY=DockerComposeRun
See: https://docs.docker.com/compose/environment-variables/
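Since the question specifically wants the value to change on every docker-compose up without editing the file, the same environment: key can also reference a variable from the shell that runs compose (a sketch reusing KEY from the question):
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
  environment:
    # Compose substitutes ${KEY} from the invoking shell at "docker-compose up" time
    - KEY=${KEY}
KEY=DockerComposeUp docker-compose up then starts all the services, and process.env.KEY inside service-a is "DockerComposeUp".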