I'm trying to read an environment variable in a solr.properties file. Solr is running in a Docker container and my docker-compose looks like this:
solr:
  environment:
    - DB_NAME="xxxx"
My solr.properties is in /var/solr/ and I tried to read the environment variable like this:
jdbc.url=jdbc:mysql://localhost:3306/${DB_NAME}?zeroDateTimeBehavior=convertToNull&useUnicode=false
jdbc.url=jdbc:mysql://localhost:3306/${env.DB_NAME}?zeroDateTimeBehavior=convertToNull&useUnicode=false
jdbc.url=jdbc:mysql://localhost:3306/${env:DB_NAME}?zeroDateTimeBehavior=convertToNull&useUnicode=false
I'm just getting started with Docker, any ideas?
You access the environment variable with ${DB_NAME}. Also note that you need to replace localhost with the name of the database service if your database runs in a different container (i.e. is also a service in docker-compose.yml) than Solr, as it should be.
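For example, assuming the MySQL service in your docker-compose.yml is called db, the property line from the question would become:

jdbc.url=jdbc:mysql://db:3306/${DB_NAME}?zeroDateTimeBehavior=convertToNull&useUnicode=false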
In my docker-compose.yml, I defined two services, app and db.
version: "3.7"
services:
app:
image: my_app
container_name: my-app
ports:
- ${MY_PORT}:${MY_PORT}
env_file:
- ./app.env
...
depends_on:
- db
environment:
- DATABASE_URL=${DB_URL}
db:
image: my_db
container_name: my-db
env_file:
- ./db.env
ports:
- ${DB_PORT}:${DB_PORT}
As you can see above, I have defined two env files, app.env and db.env, in the env_file option of the app and db services.
app.env:
MY_PORT=8081
db.env:
DB_PORT=4040
DB_URL=postgres://myapp:app@db:4040/myapp
I want to check whether my docker-compose can successfully read the environment variables, so I run the command docker-compose config. However, the output is:
$ docker-compose config
WARNING: The MY_PORT variable is not set. Defaulting to a blank string.
WARNING: The DB_URL variable is not set. Defaulting to a blank string.
WARNING: The DB_PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.app.ports is invalid: Invalid port ":", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
services.db.ports is invalid: Invalid port ":", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
Why can't my docker-compose read the environment variables from the env files I declared in the env_file option in my docker-compose.yml?
I also have a second question: I understand that the env file normally shouldn't be version controlled since it could contain credentials. How should env files be used for different environments, e.g. development, staging and production, given that each environment has different values for those variables? Could someone please provide some examples?
The reason this is failing is that the environment variables you are defining in the external app.env and db.env files, and specifying in the env_file option, are only set inside the container that is started; they are not used for variable expansion inside the docker-compose.yml file when it is parsed by docker-compose.
This is easily confused with the option of supplying a file named .env in the same location as the docker-compose.yml file. docker-compose looks for a file specifically named .env next to the docker-compose.yml file (or next to the file you specify with the -f switch) and uses the environment variables in that file for variable expansion in the docker-compose.yml file before parsing it.
In other words:
The env_file option
Will set environment variables inside your container; it is just a convenience feature that allows you to externalise the environment variables from the docker-compose.yml file.
Environment variables in these files will NOT be used for variable expansion in the docker-compose.yml file before it is parsed by docker-compose.
The .env file
Will be used for environment variable expansion inside the docker-compose.yml file before parsing.
Will NOT set environment variables inside the started container.
Suggested solution to the first question
If you migrate your values into a single .env file and place it in the same directory as your docker-compose.yml file, this should work.
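For example, a single .env file next to docker-compose.yml, consolidating the values from app.env and db.env, would look like this:

MY_PORT=8081
DB_PORT=4040
DB_URL=postgres://myapp:app@db:4040/myapp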
Second question
As I understand your second question, you are asking how the .env file, or the env_file option should be used to configure your services for your different environments.
I do not think there is a single, simple answer to this. It can be solved in a number of ways, but it also depends on what you are deploying to. Is it Kubernetes? Docker Swarm? Or just a single-node Docker host?
Kubernetes and Docker swarm have different means of helping you out with this.
Kubernetes secrets
Docker swarm secrets
Those are highly secure solutions, where access to the secrets can be limited, and the secrets will not be seen by developers or operators that do not have access.
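As a rough sketch of the Docker swarm variant (swarm mode only; the secret name, stack name and the use of the official mysql image's *_FILE convention are illustrative assumptions, not part of the question), you create the secret once and reference it from the stack file:

echo "s3cret" | docker secret create db_root_password -

version: "3.1"
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
    secrets:
      - db_root_password
secrets:
  db_root_password:
    external: true

The stack is then started with docker stack deploy -c docker-compose.yml my_stack, and the secret shows up inside the container as the file /run/secrets/db_root_password.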
But for a single-node Docker host, not operating in swarm mode (secrets only work in swarm mode), there really aren't a lot of fancy options. You will have to manage this pretty manually in your build and deploy pipelines, as far as I am aware.
You are right that the sensitive configuration of your services should not go in the same repository as the service definition. Things like the root password for a database, or credentials to your service discovery service for your production environment, do not need to live next to the sources.
Traditionally, another repository would contain this, giving you the opportunity to limit the group of people that have this access. The build/deployment server or service will check out the new revision of your service, build it perhaps, then check out the configuration repository and start the services with the configuration from there. And make sure to remove the configuration files afterwards.
That would be the solution I would recommend for a single node docker host deployment regime - two repositories, and some scripting that ensures that the correct .env file is put in place during deployment, and removed again.
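As a minimal sketch of that deployment step (the repository names, layout and the staging environment are assumptions):

# check out the service and its configuration repository
git clone git@example.com:myorg/my-service.git
git clone git@example.com:myorg/my-service-config.git

# put the environment-specific values in place, deploy, then clean up
cp my-service-config/staging/.env my-service/.env
(cd my-service && docker-compose up -d)
rm my-service/.env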
I hope this is helpful.
How can I share environment variables since the --link feature was deprecated?
The Docker documentation (https://docs.docker.com/network/links/) states
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link. One feature that user-defined networks do not support that you can do with --link is sharing environment variables between containers. However, you can use other mechanisms such as volumes to share environment variables between containers in a more controlled way.
But how do I share environment variables by using volumes? I did not find anything about environment variables in the volumes section.
The problem that I have is that I want to set a database password as environment variable when I start the container. Some other container loads data into the database and for that needs to connect to it and provide the credentials. So far the loading container discovered the password on its own by reading the environment variable. How do I do that now without --link?
Generally, you do it by explicitly providing the same environment variable to other containers. This is easy if you're using a docker-compose.yml to manage your containers, because then you can do this:
version: "3"
services:
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  frontend:
    image: webserver
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
Then if you set MYSQL_ROOT_PASSWORD in your .env file, the same value will be provided to both the database and frontend containers. If you're not using docker-compose, you can still simplify things by using an environment file. Create a file named, e.g., database.env that contains:
MYSQL_ROOT_PASSWORD=secret
Then point your containers at that using docker run --env-file database.env ....
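For example, reusing the image names from the compose example above purely as an illustration, both containers receive the same value:

docker run -d --env-file database.env --name database mysql
docker run -d --env-file database.env --name frontend webserver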
You can't share environment variables using volumes, but you can of course share files. So another option would be to have your database container write a file containing the password to a shared volume, and then read that in your other containers.
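A rough sketch of that file-based approach (the volume name, file path and wrapper scripts are assumptions, not something the answer prescribes):

services:
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
    volumes:
      - shared-secrets:/secrets
    # a wrapper entrypoint could run: echo "$MYSQL_ROOT_PASSWORD" > /secrets/db_password
  loader:
    image: my_loader
    volumes:
      - shared-secrets:/secrets
    # the loader reads /secrets/db_password before connecting to the database
volumes:
  shared-secrets: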
I have an app that runs on several Docker containers. To simplify my problem, let's say I have 3 containers: one for MySQL and 2 for 2 instances of the API (sharing the same volume where the code is, but with a different env specifying different database settings), as configured in the following docker-compose.yml:
services:
  api-1:
    image: mynamespace/my-image-name:1.0
    environment:
      DB_NAME: db_api_1
  api-2:
    image: mynamespace/my-image-name:1.0
    environment:
      DB_NAME: db_api_2
In a Makefile I have rules for deploying the containers and installing the database for each of my api instances.
What I am trying to achieve is a make rule that dumps a database for a given env. Knowing that I have no MySQL client installed on my API instances, I thought there should be a way to extract the env variables I need (with printenv VARNAME) from an API container and then use them in the database container.
Does anyone know how this could be achieved?
Assuming that it's an environment variable that you set using the -e option to docker run, you could do something like this:
docker exec api_container sh -c 'echo $VARNAME'
If it is an environment variable that was set inside the container, e.g. from a script, then you're mostly out of luck. You could of course inspect /proc/<pid>/environ, but that's hacky and I wouldn't recommend it.
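Tying that back to the make rule from the question, a hypothetical shell sketch (which could live in a make recipe) might look like this; the container names api-1 and mysql, and the presence of MYSQL_ROOT_PASSWORD inside the official mysql image's container, are assumptions:

# grab DB_NAME from the api container, then run mysqldump inside the mysql container
DB_NAME=$(docker exec api-1 printenv DB_NAME)
docker exec -e DB_NAME="$DB_NAME" mysql \
  sh -c 'exec mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" "$DB_NAME"' > dump_api_1.sql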
It also sounds as if you would benefit from using something like docker-compose to manage your containers.
Context
We are migrating an older application to Docker and as a first step we're working around some constraints. The database cannot yet be put in a container and, moreover, it is shared between all developers in our team. So this question is about finding a fix for a temporary problem.
To avoid clashing with other developers using the same database, there is a system in place where each developer machine starts the application with a value that is unique to that machine. Each container should use this same value.
Question
We are using docker-compose to start the containers. Is there a way to provide an (environment) variable to it that gets propagated to all containers?
How I'm trying to do it:
My docker-compose.yml looks kind of like this:
my_service:
  image: my_service:latest
  command: ./my_service.sh
  extends:
    file: base.yml
    service: base
  environment:
    - batch.id=${BATCH_ID}
Then I thought running BATCH_ID=$somevalue docker-compose up my_service would fill in the ${BATCH_ID}, but it doesn't seem to work that way.
Is there another way? A better way?
Optional: Ideally everything should be contained so that a developer can just call docker-compose up my_service, with compose itself calculating a value to pass to all the containers. But from what I see online, I think this is not possible.
You are correct. Alternatively you can just specify the env var name:
my_service:
  environment:
    - BATCH_ID
This way, the variable BATCH_ID is taken from the environment in which docker-compose is run and passed into the container under the same name.
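As a quick illustration (the value 42 is arbitrary):

BATCH_ID=42 docker-compose up -d my_service
docker-compose exec my_service printenv BATCH_ID   # prints 42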
I don't know what I changed, but suddenly it works as described.
BATCH_ID is the name of the environment variable on the host.
batch.id will be the name of the environment variable inside the container.
my_service:
  environment:
    - batch.id=${BATCH_ID}
I use Docker Compose to spin up my containers. I have a RethinkDB service container that exposes (amongst others) the host port in the following env var: APP_RETHINKDB_1_PORT_28015_TCP_ADDR.
However, my app must receive this host as an env var named RETHINKDB_HOST.
My question is: how can I alias the given env var to the desired one when starting the container (preferably in the most Dockerish way)? I tried:
env_file: .env
environment:
  - RETHINKDB_HOST=$APP_RETHINKDB_1_PORT_28015_TCP_ADDR
but first, it doesn't work and second, it doesn't look as if it's the best way to go.
When one container is linked to another, it sets environment variables, but also a host entry. For example,
ubuntu:
  links:
    - rethinkdb:rethinkdb
will allow ubuntu to ping rethinkdb and have it resolve the IP address. This would allow you to set RETHINKDB_HOST=rethinkdb. This won't work if you are relying on that variable for the port, however; that's the only thing I can think of besides adding a startup script or modifying your CMD.
If you want to modify your CMD, which is currently set to command: service rethink start, for example, just change it to prepend the variable assignment (the $$ keeps docker-compose from expanding the variable at parse time, so it is resolved inside the container), e.g.
command: sh -c 'export RETHINKDB_HOST=$$APP_RETHINKDB_1_PORT_28015_TCP_ADDR && service rethink start'
The approach would be similar if you are using a startup script; you would just add that variable assignment as a line before the service starts.
The environment variable APP_RETHINKDB_1_PORT_28015_TCP_ADDR that you are trying to use already contains the port number; it is already kind of "hard coded". I think you simply have to use this:
environment:
  - RETHINKDB_HOST=28015