I have a flask app that uses rabbitmq, where both run as docker containers (along with other components, such as celery workers). I want to use a common .env environment file for both local dev and container use in my docker-compose.
Example .env
RABBITMQ_DEFAULT_HOST=localhost
Now, if I use this with flask run it works fine, because the container's rabbitmq port is mapped to the host. If I run it inside the flask docker container, it fails, because localhost inside the flask container is not the same as the host. If I change localhost to my container name, rabbitmq:
RABBITMQ_DEFAULT_HOST=rabbitmq
It resolves nicely inside the flask container via docker's DNS to the dynamic IP of the rabbitmq container (a local port mapping isn't even necessary). However, flask run during development has no knowledge of this name/IP mapping and will fail.
Is there any easy way to handle this so it's easily portable to other devs and just "works" when either outside using flask run or inside the container via docker-compose?
I'd also like to limit the port exposure if possible, such as 127.0.0.1:5672:5672.
Update
So far, this is the best I've come up with: in the program, I use a socket to check whether the container name resolves; if not, it falls back to the env variable with a default of localhost.
import os
import socket

def get_rabbitmq_host() -> str:
    try:
        # Resolves only when docker's embedded DNS knows the container name
        return socket.gethostbyname("rabbitmq")  # container name
    except socket.gaierror:
        return os.getenv("RABBITMQ_DEFAULT_HOST", "localhost")
Here is another method I tried that's a lot faster (no DNS timeout), but it changes the order of the checks a bit.
def get_rabbitmq_host() -> str:
    # Check whether something is already listening locally on the rabbitmq port
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(1)
    result = sock.connect_ex(("127.0.0.1", 5672))
    sock.close()
    if result == 0:
        return "127.0.0.1"
    elif os.getenv("RABBITMQ_DEFAULT_HOST") in ("localhost", "127.0.0.1"):
        return "rabbitmq"
    else:
        return os.getenv("RABBITMQ_DEFAULT_HOST", "rabbitmq")
Well no, not really. Or yes, depending on how you view it.
Since you have now found out that localhost does not mean the same thing in every context, maybe you should split the variable in two, even though in some situations both may have the same value.
So just something like
rabbit_mq_internal_host=localhost
rabbit_mq_external_host=rabbitmq #container name!
Is there any easy way to handle this so it's easily portable to other devs and just "works" when either outside using flask run or inside the container via docker-compose?
Well, that is the point of .env files. You have two different environments here, so make two different .env files. Or let everyone adjust the .env file according to their preferred way of running the app.
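For example, a minimal sketch of the two-file approach (the names .env.local and .env.docker are just assumptions, pick whatever fits your project):

# .env.local -- used with a plain `flask run` on the host
RABBITMQ_DEFAULT_HOST=localhost

# .env.docker -- used by the containers
RABBITMQ_DEFAULT_HOST=rabbitmq

# docker-compose.yml (excerpt)
services:
  flask:
    env_file:
      - .env.docker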
I'd also like to limit the port exposure if possible, such as 127.0.0.1:5672:5672
If you connect from container to container within a docker network, you do not need to publish the port at all. You only need to publish ports that have to be accessed from outside the network.
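If you do want the port reachable from the host for local development, but not from other machines, you can bind the published port to the loopback interface only. A minimal sketch of the rabbitmq service (the image tag is an assumption):

services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "127.0.0.1:5672:5672"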
I am not sure I completely understood your situation. I am assuming that you are developing the application and have environment variables which you would like to keep separated per environment, for example localhost, development, test, etc.
With that assumption, I would suggest having one env file per environment, like env_localhost and env_development, where each key=value matches that environment. Also, keep an env.template file with empty key= entries, so that anyone who does not want a docker-based run can set things up accordingly in a new file called .env.
Once the above is created, you can modify the docker build for the app (the Dockerfile, I mean) to use the following snippet. The important parts are the build argument called SETUP and the copy of the chosen env file to .env during the build process:
# ... Other build commands follow
WORKDIR /usr/src/backend
COPY ./backend .
ARG SETUP=development # Build argument we will pass in later; defaults to development.
COPY ./backend/env_${SETUP} .env # Copies the matching env file as .env; passed automatically during docker-compose build (see below).
# ... Other build commands follow
After modifying the Dockerfile, you can run docker-compose build for a given environment by passing SETUP as a build argument, as follows:
docker-compose build --build-arg SETUP=localhost your_service_here
Additionally, once this process is stable you can create a Makefile and have make build-local, make build-dev and so on.
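As an alternative to passing the flag on the command line (or via the Makefile), the build argument can also be declared directly in the compose file; a small sketch, assuming the version 2+ compose file format and a service named your_service_here:

services:
  your_service_here:
    build:
      context: .
      args:
        SETUP: localhost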
Related
For a few days now I've been trying to get a docker container up and running, and something always goes wrong.
I need (mostly) a LAMP stack, only with MongoDB instead of MySQL.
Of course I started by looking on Docker Hub and trying to compose an image from others, and googled for configs. The simplest one couldn't get past setting MONGODB_ADMIN_USER and MONGODB_ADMIN_PASSWORD and always exited with code 1, even though those variables were set in the yml.
I tried to start with just the centos/mongodb image, install apache, php and whatnot, commit it, and work on my own image, but without a kernel it's hard to properly install and run apache within a docker container.
So I tried once more and found a promising project here: https://github.com/akhomy/docker-compose-lamp
but I can't attach to the container and can't reach localhost with the default settings, though the compose stage apparently goes OK.
Does anyone, by chance, have a working set of Dockerfiles / docker-compose?
Or some helpful hint? Really, it looks like a straightforward task: take two images from Docker Hub, make a docker-compose.yml, run docker-compose up, case closed. I can't wrap my head around this :|
The Docker approach is not to put all services in one container but to have a single container per service. All Docker tools are aligned with this.
To start your LAMP stack, you just have to download docker-compose, create a docker-compose.yml file with three services defined, and run docker-compose up.
Docker Compose is an orchestration tool for containers, suited for a single machine.
You should at least take a short tour of this tool; just as an example, here is a sample config file:
docker-compose.yml
version: '3'
services:
  apache:
    image: bitnami/apache:latest
    # .. here goes apache config ...
  db:
    image: mongo
    # .. here goes mongo config ...
  php:
    image: php
    # .. here goes php config ...
After you start this with docker-compose up, a network is created automatically for you and all services join it. They will see each other under their service names (say, to connect to the database from php you would use db as the host name).
To connect to these services from the host PC, you will need to publish ports explicitly.
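For instance, a minimal sketch of publishing the web server to the host (the port numbers are assumptions; the bitnami image typically listens on 8080):

services:
  apache:
    image: bitnami/apache:latest
    ports:
      - "8080:8080"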
I am having some trouble with my docker containers and environment variables.
Currently I have a docker-compose.yml with the following defined:
version: '2.1'
services:
  some-service:
    build:
      context: .
    image: image/replacedvalues
    ports:
      - 8080
    environment:
      - PROFILE=acc
      - ENVA
      - ENVB
      - TZ=Europe/Berlin
  some-service-acc:
    extends:
      service: some-service
    environment:
      - SERVICE_NAME=some-service-acc
Now when I deploy this manually (directly via the SSH command line) on Server A, it will take the environment variables from Server A and put them in my container. So I have the values of ENVA and ENVB from the host in my container, using the following command (after building the image, of course): docker-compose up some-service-acc.
We are currently developing a better infrastructure and want to deploy services via Jenkins. Jenkins is up and running in a docker container on server B.
I can deploy the service via Jenkins (Job-DSL, temporarily setting DOCKER_HOST="tcp://serverA:2375"). So it runs all docker(-compose) commands on Server A from the Jenkins container on Server B. The service comes up and runs, except that it doesn't have values for ENVA and ENVB.
Jenkins runs the following with the Job-DSL groovy script:
withEnv(["DOCKER_HOST=tcp://serverA:2375"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
I tried setting them in my Jenkins container and on Server B itself, but neither worked. It only works when I deploy manually, directly on Server A.
When I use docker inspect to inspect the running container, I get the following output for the env block:
"Env": [
"PROFILE=acc",
"affinity:container==JADFG09gtq340iggIN0jg53ij0gokngfs",
"TZ=Europe/Berlin",
"SERVICE_NAME=some-service-acc",
"ENVA",
"ENVB",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"JAVA_VERSION=8",
"JAVA_UPDATE=121",
"JAVA_BUILD=13",
"JAVA_PATH=e9e7ea248e2c4826b92b3f075a80e441",
"JAVA_HOME=/usr/lib/jvm/default-jvm",
"JAVA_OPTS="
]
Where do I need to set the environment variables so that they will be passed to the container? I would prefer to store the variables on Server A, but if this is not possible, can someone explain to me how it could be done? Hardcoding the values in the compose file or anywhere else in the source is not an option, as they contain sensitive data.
If I am asking this in the wrong place, please redirect me to where I should be.
Thanks!
You need to set the environment variables in the shell that is running the docker-compose command line. In Jenkins, that's best done inside your Groovy script (Jenkins doesn't use the host environment within the build slave):
withEnv(["DOCKER_HOST=tcp://serverA:2375", "ENVA=hello", "ENVB=world"]) {
sh "docker-compose pull some-service-acc"
sh "docker-compose -p some-service-acc up -d some-service-acc"
}
Edit: from the comments, you also want to pass secrets.
To do that, there are plugins like Mask Passwords that would allow you to pass variables without them showing up in the logs or job configuration. (I'm fairly certain a determined intruder could still get to the values, since Jenkins itself knows them and passes them to your script in clear text.)
The better option IMO is to use a secrets management tool inside of docker. Hashicorp has their Vault product which implements an encrypted K/V store where values are accessed with a time limited token and offers the ability to generate new passwords per request with integration into the target system. I'd consider this the highest level of security when fully configured, but you can configure this countless ways to suit your own needs. You'll need to write something to pull the secret and inject it into your container's environment (it's a rest protocol that you can add to your entrypoint).
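As a rough sketch of that idea, an entrypoint script could fetch the values over Vault's HTTP API before starting the real process. Everything here is illustrative: it assumes a KV version 1 secret stored at secret/myapp and that VAULT_ADDR and VAULT_TOKEN are provided to the container.

#!/bin/sh
# entrypoint.sh - pull secrets from Vault, export them, then exec the real command
set -e
SECRET_JSON=$(curl -s -H "X-Vault-Token: $VAULT_TOKEN" "$VAULT_ADDR/v1/secret/myapp")
export ENVA=$(echo "$SECRET_JSON" | jq -r '.data.ENVA')
export ENVB=$(echo "$SECRET_JSON" | jq -r '.data.ENVB')
exec "$@"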
The latest option from Docker itself is secrets management that requires the new Swarm Mode. You save your secret in the swarm and add it to the containers you want as a file using an entry in the docker-compose.yml version 3 format. If you already use Swarm Mode and can start your containers with docker stack deploy instead of docker-compose, this is a fairly easy solution to implement.
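For the Docker secrets route, a minimal compose sketch could look like the following (the secret name is an assumption; inside the container the value appears as the file /run/secrets/enva rather than as an environment variable):

version: '3.1'
services:
  some-service-acc:
    image: image/replacedvalues
    secrets:
      - enva
secrets:
  enva:
    external: true  # created beforehand with: docker secret create enva -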
There seems to be sparse, conflicting information around on this subject. I'm new to Docker and need some help. I have several docker containers to run an application; some require different config files for local development than they do for production. I don't seem to be able to find a neat way to automate this with Docker.
My containers that include custom config are Nginx and Freeradius, and my code/data container runs Laravel, which therefore requires a .env.php file (L4.2 at the moment).
I have tried Docker's environment variables in docker-compose:
docker-compose.yml:
freeradius:
  env_file: ./env/freeradius.env
./env/freeradius.env
DB_HOST=123.56.12.123
DB_DATABASE=my_database
DB_USER=me
DB_PASS=itsasecret
Except I can't pick those variables up in /etc/freeradius/mods-enabled/sql where they need to be.
How can I get Docker to run a 'local' container with local config, or a 'production' container with production config, without having to actually build different containers, and without having to attach to each container to configure it manually? I need it automated, as this will eventually be used in quite a large production environment with a large cluster of servers and many instances.
Happy to learn Ansible if this is how people achieve this.
If you can't use environment variables to configure the application (which is my understanding of the problem), then the other option is to use volumes to provide the config files.
You can use either "data volume containers" (which are containers with the sole purpose of sharing files and directories) with volumes_from, or you can use a named volume.
Data Volume container
If you go with the "data volume container" route, you would create a container with all the environment configuration files. Every service that needs a file uses volumes_from: - configs. In dev you'd have something like:
configs:
  build: dev-configs/

freeradius:
  volumes_from:
    - configs
The dev-configs directory will need a Dockerfile to build the image, which will have a bunch of VOLUME directives for all the config paths.
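A minimal sketch of what that Dockerfile might contain (all paths here are assumptions for illustration):

FROM busybox
COPY freeradius/sql /etc/freeradius/mods-enabled/sql
COPY laravel/.env.php /config/laravel/.env.php
VOLUME /etc/freeradius/mods-enabled
VOLUME /config/laravel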
For production (and other environments) you can create an override file which replaces the configs service with a different container:
docker-compose.prod.yml:
configs:
  build: prod-configs/
You'll probably have other settings you want to change between dev and prod, which can go into this file as well. Then you run compose with the override file:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
You can learn more about this here: http://docs.docker.com/compose/extends/#multiple-compose-files
Named Volume
If you go with the "named volume" route, it's a bit easier to configure. On dev you create a volume with docker volume create thename and put some files into it. In your config you use it directly:
freeradius:
  volumes:
    - thename:/etc/freeradius/mods-enabled/sql
In production you'll either need to create that named volume on every host, or use a volume driver plugin that supports multihost (I believe flocker is one example of this).
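Note that with the version 2+ compose file format you would also declare a volume created outside of compose as external at the top level; a small sketch, reusing the name thename from above:

volumes:
  thename:
    external: true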
Runtime configs using Dockerize
Finally, another option that doesn't involve volumes is to use https://github.com/jwilder/dockerize which lets you generate the configs at runtime from environment variables.
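As a rough sketch of how that works (the template path, variable names, and the foreground flag are assumptions): you write a Go template for the config file and let dockerize render it from the environment before starting the service.

# sql.tmpl (excerpt of the freeradius sql module config)
server = "{{ .Env.DB_HOST }}"
login = "{{ .Env.DB_USER }}"
password = "{{ .Env.DB_PASS }}"

# container command
dockerize -template /templates/sql.tmpl:/etc/freeradius/mods-enabled/sql freeradius -f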
I use Docker Compose to spin up my containers. I have a RethinkDB service container that exposes (amongst others) the host port in the following env var: APP_RETHINKDB_1_PORT_28015_TCP_ADDR.
However, my app must receive this host as an env var named RETHINKDB_HOST.
My question is: how can I alias the given env var to the desired one when starting the container (preferably in the most Dockerish way)? I tried:
env_file: .env
environment:
  - RETHINKDB_HOST=$APP_RETHINKDB_1_PORT_28015_TCP_ADDR
but first, it doesn't work and second, it doesn't look as if it's the best way to go.
When one container is linked to another, it sets environment variables, but also a hosts entry. For example,
ubuntu:
  links:
    - rethinkdb:rethinkdb
will allow ubuntu to ping rethinkdb and have it resolve the IP address. This would allow you to set RETHINKDB_HOST=rethinkdb. This won't work if you are relying on that variable for the port, however, but that's the only thing I can think of besides adding a startup script or modifying your CMD.
If you want to modify your CMD, which is currently set to command: service rethink start, for example, just change it to prepend the variable assignment, e.g.
command: sh -c 'export RETHINKDB_HOST=$APP_RETHINKDB_1_PORT_28015_TCP_ADDR && service rethink start'
The approach would be similar if you are using a startup script: you would just add that variable assignment as a line before the service starts.
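For instance, a minimal sketch of such a startup script (the file name and the exact service command are assumptions):

#!/bin/sh
# start.sh - map the linked variable to the name the app expects, then start the service
export RETHINKDB_HOST="$APP_RETHINKDB_1_PORT_28015_TCP_ADDR"
exec service rethink start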
The environment variable name APP_RETHINKDB_1_PORT_28015_TCP_ADDR that you are trying to use already contains the port number; it is already kind of "hard coded". I think you simply have to use this:
environment:
  - RETHINKDB_HOST=28015
I have my app inside a container and it's reading environment variables for passwords and API keys to access services. If I run the app on my machine (not inside docker), I just export SERVICE_KEY='wefhsuidfhda98' and the app can use it.
What's the standard approach to this? I was thinking of having a secret file which would get added to the server with export commands and then run a source on that file.
I'm using docker & fig.
The solution I settled on was the following: save the environment variables in a secret file and pass those on to the container using fig.
1. have a secret_env file with secret info, e.g.

   export GEO_BING_SERVICE_KEY='98hfaidfaf'
   export JIRA_PASSWORD='asdf8jriadf9'

2. have secret_env in my .gitignore

3. have a secret_env.template file for developers, e.g.

   export GEO_BING_SERVICE_KEY='' # can leave empty if you wish
   export JIRA_PASSWORD='' # write your pass

4. in my fig.yml I send the variables through:

   environment:
     - GEO_BING_SERVICE_KEY
     - JIRA_PASSWORD

5. call source secret_env before building
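That is, something along the lines of (the exact fig commands depend on your workflow):

source secret_env
fig build
fig up -d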
docker run provides environment variables:
docker run -e SERVICE_KEY=wefsud your/image
Then your application would read SERVICE_KEY from the environment.
https://docs.docker.com/reference/run/
In fig, you'd use
environment:
  - SERVICE_KEY=wefsud
in your app spec. http://www.fig.sh/yml.html
From a security perspective, the former solution is no worse than running it on your host if your docker binary requires root access. If you're allowing 'docker' group users to run docker, it's less secure, since any docker user could docker inspect the running container. Running on your host, you'd need to be root to inspect the environment variables of a running process.
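For reference, this is all it takes for anyone with docker access to read those values (a standard docker command; the container name is a placeholder):

docker inspect --format '{{ .Config.Env }}' your_container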