I'm new to Portainer. I have only set up a couple of services in the last week, and until now it worked quite well. Now, however, I'm facing a problem whose origin I just don't understand:
I set up a Docker Compose service which is pulled from a GitHub repository. Somehow, env var values appear in the container that are never set anywhere. The env vars in question are VIRTUAL_HOST, which ends up set to portainer, and VIRTUAL_PORT, which ends up set to 9443.
The value portainer does not appear anywhere in the repository; the env vars are declared as follows:
environment:
  - VIRTUAL_HOST
  - VIRTUAL_PORT
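For reference, a bare name in the environment list makes Compose pass that variable through from the environment of the process that runs the deployment, which can silently pick up values you did not intend. Pinning the values with substitution defaults makes the source explicit; a sketch (the fallback values here are placeholders, not from the repository):

```yaml
environment:
  - VIRTUAL_HOST=${VIRTUAL_HOST:-myapp.example.com}  # taken from the stack's .env, with an explicit fallback
  - VIRTUAL_PORT=${VIRTUAL_PORT:-8080}
```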
In the Portainer stack I loaded a .env file with other environment variables and added VIRTUAL_HOST and VIRTUAL_PORT manually, both set to values different from the ones that then appear in the container.
When I open the container view in Portainer, it actually shows the wrong values. If I forcefully replace them in the container edit view, the container updates and everything works. But if I remove and restart all containers, the wrong values are back.
So the question is: where else could these environment values be injected from?
Related
I have a Flask app that uses RabbitMQ, where both are Docker containers (along with other components, such as celery workers). I want to use a common .env environment file for both dev and container use in my docker-compose.
Example .env
RABBITMQ_DEFAULT_HOST=localhost
Now, if I use this with flask run, it works fine because the container's RabbitMQ port is mapped to the host. If I run it inside the Flask docker container, it fails, because localhost of the Flask container is not the same as the host. So I change localhost to my container name, rabbitmq:
RABBITMQ_DEFAULT_HOST=rabbitmq
This resolves nicely inside the Flask container via Docker to the dynamic IP of the rabbitmq container (a local port map is not even necessary); however, flask run during development has no knowledge of this name/IP mapping and will fail.
Is there any easy way to handle this so it's easily portable to other devs and just "works" when either outside using flask run or inside the container via docker-compose?
I'd also like to limit the port exposure if possible, such as 127.0.0.1:5672:5672.
Update
So far, this is the best I've come up with: in the program, I use a socket to check whether the name resolves; if not, it falls back to the env var with a default of localhost.
import os
import socket

def get_rabbitmq_host() -> str:
    try:
        # Resolves only when running inside the docker network
        return socket.gethostbyname("rabbitmq")  # container name
    except socket.gaierror:
        return os.getenv("RABBITMQ_DEFAULT_HOST", "localhost")
Here is another method I tried that's a lot faster (no DNS timeout), but it changes the order a bit.
import os
import socket

def get_rabbitmq_host() -> str:
    # Probe the host loopback for a published RabbitMQ port first
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1)
    result = sock.connect_ex(("127.0.0.1", 5672))
    sock.close()
    if result == 0:
        return "127.0.0.1"
    elif os.getenv("RABBITMQ_DEFAULT_HOST") in ("localhost", "127.0.0.1"):
        return "rabbitmq"
    else:
        return os.getenv("RABBITMQ_DEFAULT_HOST", "rabbitmq")
Well, no, not really. Or yes, depending on how you view it.
Since you have now found out that localhost does not mean the same thing in every context, maybe you should split up the variables, even though in some situations they may have the same value.
So just something like
rabbit_mq_internal_host=localhost
rabbit_mq_external_host=rabbitmq #container name!
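In application code, the selection between the two could then look like this (a sketch using the variable names above; IN_CONTAINER is a hypothetical flag you would set only in the compose file, it is not part of the original question):

```python
import os

def get_rabbitmq_host() -> str:
    # Inside the container, prefer the external (container) name;
    # during local development with flask run, use the internal host.
    if os.getenv("IN_CONTAINER"):
        return os.getenv("rabbit_mq_external_host", "rabbitmq")
    return os.getenv("rabbit_mq_internal_host", "localhost")
```

In the compose file you would then add IN_CONTAINER=1 under the flask service's environment key, and nothing on the host.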
Is there any easy way to handle this so it's easily portable to other devs and just "works" when either outside using flask run or inside the container via docker-compose?
Well: that is the point of .env files. You have two different environments there, so make two different .env files. Or let everyone adjust the .env file according to his or her preferred way of running the app.
I'd also like to limit the port exposure if possible, such as 127.0.0.1:5672:5672
If you connect from container to container within a docker network, you do not need to publish the port at all. Only ports that have to be accessed from outside the network.
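In compose terms, both points might look like this (a sketch with assumed service and image names):

```yaml
services:
  rabbitmq:
    image: rabbitmq:3
    # No 'ports:' entry is needed for the flask container to reach this
    # service; containers on the same compose network connect directly.
    # Publish on loopback only because the host itself needs access
    # for flask run during development:
    ports:
      - "127.0.0.1:5672:5672"
```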
I am not sure I completely understood your situation. I am assuming that you are developing the application and have environments which you would like to keep separated, for example localhost, development, test, etc.
With that assumption, I would suggest having env files per environment, like env_localhost and env_development, where each key=value is set in accordance with that environment. Also, have an env.template file with empty key= entries, so that anyone who does not want a Docker-based run can set things up accordingly in a new file called .env.
Once the above is created, you can modify your app's Docker build (the Dockerfile, I mean) using the following snippet. The important parts are the build argument called SETUP and the renaming of the selected environment file to .env during the build:
# ... other build commands
WORKDIR /usr/src/backend
COPY ./backend .
ARG SETUP=development  # Build argument we will pass in later; defaults to development.
COPY ./backend/env_${SETUP} .env  # Renames the selected env file to .env; SETUP is passed during docker-compose build, as shown next.
# ... other build commands
After modifying the Dockerfile, you can run docker-compose build for a given environment by passing SETUP as a build argument, as follows:
docker-compose build --build-arg SETUP=localhost your_service_here
Additionally, once this process is stable, you can create a Makefile with targets such as make build-local, make build-dev, and so on.
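A minimal Makefile sketch of those targets (your_service_here is a placeholder, as above; recipe lines must be indented with a tab character):

```makefile
build-local:
	docker-compose build --build-arg SETUP=localhost your_service_here

build-dev:
	docker-compose build --build-arg SETUP=development your_service_here
```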
I'm experiencing very strange behaviour with docker-compose. I have a repository configured to work with Docker Swarm in production and docker-compose for development. Swarm is working fine in production, but docker-compose is behaving strangely.
Specifically, I'm defining build arguments with parameter substitution, like this:
build:
  context: .
  args:
    - APP_DIRECTORY=${APP_DIRECTORY:-/srv/app}
    - APP_ENV=${APP_ENV:-dev}
When APP_ENV is not defined or is empty, it should take the value dev. This was working fine, but now it takes the value prod when the variable is not defined. I rebooted, cleared all environment variables, even removed docker-compose and installed it again, and APP_ENV is still being assigned prod. Is there some sort of caching done by Compose that I'm not aware of?
Another strange behaviour is that docker-compose is passing proxy-related environment variables to the container. Those variables are not specified in the compose file, and they are not even present on the host. Again, is there some sort of caching taking place? And why is docker-compose passing env variables to the container that I didn't ask for?
I was making a stupid mistake: I had a .env file in the same directory, and docker-compose was reading the variables from that file.
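For anyone hitting the same thing: Compose's resolution order for ${APP_ENV:-dev} is roughly the shell environment first, then the .env file in the project directory, then the inline default. A tiny Python sketch of that precedence (illustrative only, not Compose's actual code):

```python
import os

def resolve(name: str, dotenv: dict, default: str) -> str:
    # Compose's order: the real shell environment wins over the .env file,
    # which wins over the inline ${VAR:-default} fallback.
    if name in os.environ:
        return os.environ[name]
    if name in dotenv:
        return dotenv[name]
    return default

# A forgotten .env file sitting in the project directory:
dotenv_file = {"APP_ENV": "prod"}
print(resolve("APP_ENV", dotenv_file, "dev"))  # prod (assuming APP_ENV is not also set in your shell)
```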
After reading the config point of the 12-factor app, I decided to override my config file containing default values with environment variables.
I have 3 Dockerfiles, one for an API, one for a front-end and one for a worker. I have one docker-compose.yml to run those 3 services plus a database.
Now I'm wondering whether I should define the environment variables in the Dockerfiles or in docker-compose.yml. What's the difference between using one rather than the other?
See this:
You can set environment variables in a service’s containers with the 'environment' key, just like with docker run -e VARIABLE=VALUE ...
Also, you can use ENV in a Dockerfile to define an environment variable.
The difference is:
An environment variable defined in a Dockerfile is not only used during docker build; it also persists into the container. This means that even if you do not pass -e to docker run, the container will still have the environment variable set as defined in the Dockerfile.
An environment variable defined in docker-compose.yaml, on the other hand, applies only when the container is run.
Maybe the next example will make this clearer:
Dockerfile:
FROM alpine
ENV http_proxy http://123
docker-compose.yaml:
app:
  environment:
    - http_proxy=http://123
If you define the environment variable in the Dockerfile, all containers using this image will also have http_proxy set to http://123. But the real situation may be that you only need this proxy while building the image, while the container may be run by other people who do not need the proxy, or who need a different one, so they would have to unset http_proxy in the entrypoint or override it in docker-compose.yaml.
If you define the environment variable in docker-compose.yaml, each user can choose their own http_proxy when running docker-compose up; http_proxy will not be set at all if the user does not configure it in docker-compose.yaml.
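A common middle ground, not mentioned above but worth knowing: if the proxy is only needed at build time, use a build-time ARG instead of ENV, since an ARG does not persist into the running container. A sketch:

```dockerfile
FROM alpine
# Build-time only: pass with
#   docker build --build-arg http_proxy=http://123 .
# Unlike ENV, a plain ARG does not persist into the running container.
ARG http_proxy
RUN apk add --no-cache curl
```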
I'm wondering if I've stumbled on a bug or that there's something not properly documented about variable substitution on Windows in combination with Docker Machine and Compose (installed version of docker is 1.11.1).
If I run the "docker-compose up" command for a yml file that looks like this:
volumes:
- ${FOOBAR}/build/:/usr/share/nginx/html/
And this variable doesn't exist docker compose will correctly complain about it:
The FOOBAR variable is not set. Defaulting to a blank string.
However, when I change it to an existing environment variable:
volumes:
- ${PROJECT_DIR}/build/:/usr/share/nginx/html/
It will then not properly start the container and displays the following error (trying to access the nginx container will give you a host is unreachable message):
ERROR: for nginx rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: not a directory"
If I echo the variable in the Docker Quickstart Terminal, it outputs the correct path that I set in the environment variable. If I replace ${PROJECT_DIR} with the environment variable's value, the container runs correctly.
I get the same type of error message if I try to use the environment variable for the official php image instead of the official nginx image. In both cases the docker compose file works if I substitute ${PROJECT_DIR} text with the content of the environment variable.
So is this a bug or am I missing something?
After some mucking about, I've managed to get the containers to start correctly, without error messages, if I use the following (the variable contains the full path to the local files):
volumes:
- ${PROJECT_DIR}:/usr/share/nginx/html/
The nginx container is then up and running, though it cannot find the files anymore. If I replace the variable with the path it contains, it can find the files again.
The above behaviour isn't even consistent. When I added a second environment variable for substitution, the oci runtime error came back. It kept occurring when I removed that second variable, and only went away when I also removed the first one. After that, ${PROJECT_DIR}/build/ was suddenly accepted again, but still without the files being found.
Starting a bash session to the nginx container shows that the mount point for the volume contains no files.
I'm really at a loss as to what Docker is doing here and what it expects from me, especially as I have no idea what it is expanding the variables in the compose file to.
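One thing that may be worth checking before giving up on substitution: Compose gained an opt-in flag that converts Windows-style paths in volume definitions automatically (added around Compose 1.9, so verify against your version):

```shell
# Set before running docker-compose up (Docker Quickstart Terminal / bash);
# in a .env file, write it without the 'export' keyword.
export COMPOSE_CONVERT_WINDOWS_PATHS=1
```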
In the end, my conclusion is that variable substitution is too quirky on Windows with Docker Machine to be useful. There is, however, an alternative to variable substitution.
If you need a docker environment that does the following:
Can deploy on different computers that don't run the same OS
Doesn't care whether the host runs Docker natively or via VirtualBox (which can require path changes)
Then your best bet is to use extending.
First you create the docker-compose.yml file that contains the images you'll need, for example a PHP image with MySQL:
php:
  image: php:5.5-apache
  links:
    - php_db:mysql
    - maildev:maildev
  ports:
    - 8080:80

php_db:
  image: mariadb
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: examplepass
You might notice that there aren't any volumes defined in this docker-compose file. That is something we're going to define in a file called docker-compose.override.yml:
php:
  volumes:
    - /workspaces/Eclipse/project/:/var/www/html/
When you have both files in one directory, docker-compose does something interesting: it combines them into one, adding to and overwriting the settings in docker-compose.yml with those present in docker-compose.override.yml.
Running docker-compose up then results in a docker run that is configured for the machine you're working on.
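For the php service in the two files above, the merged configuration is effectively the following (a sketch of the result, assuming the image meant is the official php:5.5-apache; php_db is unchanged):

```yaml
php:
  image: php:5.5-apache
  links:
    - php_db:mysql
    - maildev:maildev
  ports:
    - 8080:80
  volumes:
    - /workspaces/Eclipse/project/:/var/www/html/
```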
You can get similar behaviour with custom file names if you change a few things in your docker-compose command:
docker-compose -f docker-compose.yml -f docker-compose.conf.yml up
The key detail is that docker-compose can accept multiple compose files and combines them into one. This happens from left to right.
Both methods allow you to create a basic compose file that configures the containers you need. You can then override or add the settings needed for the specific computer you're running Docker on.
The page Overview of docker-compose CLI has more details on how these commands work.
I use Docker Compose to spin up my containers. I have a RethinkDB service container that exposes (amongst others) the host address in the following env var: APP_RETHINKDB_1_PORT_28015_TCP_ADDR.
However, my app must receive this host as an env var named RETHINKDB_HOST.
My question is: how can I alias the given env var to the desired one when starting the container (preferably in the most Dockerish way)? I tried:
env_file: .env
environment:
- RETHINKDB_HOST=$APP_RETHINKDB_1_PORT_28015_TCP_ADDR
but first, it doesn't work and second, it doesn't look as if it's the best way to go.
When one container is linked to another, Docker sets environment variables, but also adds a host entry. For example,
ubuntu:
  links:
    - rethinkdb:rethinkdb
will allow ubuntu to ping rethinkdb and resolve its IP address. This would let you set RETHINKDB_HOST=rethinkdb. It won't work if you are relying on that variable for the port, however, but that's the only caveat I can think of besides adding a startup script or modifying your CMD.
If you want to modify your CMD, which is currently set to command: service rethink start, for example, just change it to prepend the variable assignment (exported, so it actually reaches the service process), e.g.
command: sh -c 'export RETHINKDB_HOST=$APP_RETHINKDB_1_PORT_28015_TCP_ADDR && service rethink start'
The approach would be similar if you are using a startup script; you would just add that variable assignment as a line before the service starts.
The environment variable name APP_RETHINKDB_1_PORT_28015_TCP_ADDR you are trying to use already contains the port number in its name; it is already kind of hard-coded. I think you simply have to use this:
environment:
- RETHINKDB_HOST=28015