I have packaged a web app I've been working on as a docker image.
I want to be able to start the image with some configuration, such as the URL of the CouchDB server to use.
What is the best way of supplying this configuration? My app relies on environment variables; can I set these at run time?
In addition to setting environment variables during docker run (using -e/--env and --env-file) as you already discovered, there are other options available:
Using --link to link your container to (for instance) your couchdb server. This works if your server is also a container (or if you use an ambassador container to reach another server). Linking containers makes certain environment variables available, including the server's IP and port, which your script can use. This option is appropriate when you only need to set references to services.
Using volumes. Volumes defined in the Dockerfile can be mapped to host folders, so you can use them to access configuration files, for instance. This is useful for very complex configurations.
Extending the image. You can create a new image based on your original one and ADD custom configuration files or ENV entries. This is the least flexible option, but it is useful for complex configurations to simplify launching, especially when the configuration is mostly static (probably a bad idea for services/hostnames, but it can work for frameworks that can be configured differently for dev/production). It can be combined with any of the above; see the sketch below.
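As a minimal sketch of the last option, extending the image (the image name, variable, and file paths here are placeholders, not from the question):

# Dockerfile based on the original image, baking in mostly-static configuration
FROM mywebapp:latest
# static setting baked into the image (a bad idea for hostnames, per the caveat above)
ENV APP_MODE production
# custom configuration file taken from the build context
ADD app.conf /etc/mywebapp/app.conf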
It seems docker supports setting env variables - should have read the manual!
docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
http://docs.docker.com/reference/commandline/cli/#run
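For completeness, the env.list file passed to --env-file is just one variable per line; a minimal sketch (values are placeholders):

# env.list -- lines starting with # are comments
COUCHDB_URL=http://couchdb.example.com:5984
MYVAR2=foo
# a bare name passes the variable's value through from the host environment
MYVAR1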
I am installing OpenStack Keystone now.
A standalone Keystone installation needs three components: mysql, python, and apache2.
Obviously I can't use all of them as the base, so I made python the base image and added RUN statements to install mysql and apache2.
These RUN statements feel like duplication, because images for all three components already exist on the Docker public registry.
Is there a good solution or proper way to reuse the existing external Dockerfiles?
There seems to be some confusion here about what a Dockerfile does: it defines a single Docker image. In general, the recommended way to run applications in Docker is to have a container for each service and have them connect to other services in other containers as needed (more on this later).
In your case, it sounds like your application consists of OpenStack Keystone (which requires Python and Apache to run) and MySQL. So I would install Python & Apache in your Dockerfile, and set up MySQL (possibly just using the image from the public repository) as a separate container that the OpenStack container connects to over the network.
As mentioned above, this scenario is the recommended way to run Docker applications - it follows the Unix paradigm of "each app does only one thing, but does it very well". Each container does one thing only and connects to any other services in other containers. But it is possible to run multiple services in the same container - e.g. Keystone running on Apache/Python AND MySQL in the same container. If this is your goal, you would write a Dockerfile that installs everything and gets everything running together. This Dockerfile is likely to be pretty complicated and will require an ENTRYPOINT that gets both MySQL and Apache working together. You're likely to wind up duplicating a lot of the work that has already gone into the standard MySQL and Apache images.
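As a minimal sketch of that recommended setup (container and image names are placeholders; the official mysql image is assumed):

# run MySQL as its own container, straight from the public image
docker run -d --name keystone-db -e MYSQL_ROOT_PASSWORD=secret mysql
# run Keystone from your own image, linked to the database container
docker run -d --name keystone --link keystone-db:db my-keystone-image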
You can make use of Docker Compose to run your application with mysql, python, and apache2.
Using Docker Compose allows you to control the application setup with a single command. You just need to write a docker-compose.yml file, plus Dockerfiles for the containers you will set up.
In your case you can have one Dockerfile that sets up a python and apache2 container, and another container using mysql as the base image for the database.
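A minimal docker-compose.yml sketch along those lines (service names and the build path are placeholders):

keystone:
  build: ./keystone       # Dockerfile that sets up python and apache2
  links:
    - db
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: secret

Running docker-compose up then brings up both containers with a single command.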
I am currently working on a distributed graph processing platform which maintains an Akka cluster inside Docker containers, and I have recently been granted access to a large cluster to test it on. Unfortunately, this cluster does not run Docker, only Singularity.
This did not initially seem like an issue, as Singularity supports Docker images. However, due to the nature of the Akka cluster, I have to pass several environment variables and bind several ports. As an example, a 'Partition Manager' within the system would be run with the following command:
docker run -p $PM0Port:2551 --rm -e "HOST_IP=$IP" -e "HOST_PORT=$PM0Port" -v $entityLogs:/logs/entityLogs $Image partitionManager $PM0ID $NumberOfPartitions $ZooKeeper
From looking through the Singularity documentation I can see that I can create a 'Singularity' definition file and specify environment variables there, but there doesn't seem to be any documentation on binding custom ports. Nor does it explain how I could pass arguments to the default entrypoint (the project is built with 'sbt docker:publish', so I am not sure exactly where the entrypoint is defined in order to reassign it).
Even if that were the solution, since there are multiple actor types (and several instances of each), specifying environment variables and ports in a definition file would seem to require templating, creating the files at run time, and building an image for each individual actor.
I am sure I have completely missed a page somewhere that would nicely translate this docker command into its Singularity equivalent, but I just can't find it.
There is no network isolation in Singularity, so there is no need to map any port. If the process inside the container binds to an IP:port, it will be immediately reachable on the host.
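A rough translation of the docker run command above might therefore look like this (treat the SINGULARITYENV_ prefix and the -B bind flag as assumptions to verify against your Singularity version):

# no -p equivalent is needed; with no remapping, HOST_PORT should be the
# port the process actually binds (2551 in the docker command above)
SINGULARITYENV_HOST_IP=$IP \
SINGULARITYENV_HOST_PORT=2551 \
singularity run -B $entityLogs:/logs/entityLogs \
    docker://$Image partitionManager $PM0ID $NumberOfPartitions $ZooKeeper

Arguments after the image name are passed to the runscript, which Singularity generates from the Docker image's entrypoint.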
I am setting up a brand-new development environment (nginx + php-fpm) and decided to create application containers (using docker) for each service.
Normally I would install nginx and php, modify the configuration (with an editor like vim), and reload the services until everything was correctly configured.
To establish a similar procedure, I start the initial container and copy /etc/nginx to the host, modify the config files on the host, and use a Dockerfile (containing another COPY) to test the changes.
Given that the containers are somewhat ephemeral and aren't really meant to contain utilities like vim, I was wondering how people set up the initial configuration?
Once I have a working config I know the options with regard to configuration management for managing the files. It's really the establishment of new containers that I'm curious about.
You can pass configuration through environment variables or mount a host file as a data volume.
Example for nginx configuration:
docker run --name some-nginx -v /yoursystem/nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
You can also use many images from Docker Hub as a starting point, e.g. nginx-php-fpm.
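The environment-variable route looks much the same; a minimal sketch (the variable names are placeholders that your PHP code would read):

docker run --name some-php -e DB_HOST=db.example.com -e DB_PORT=5432 -d php:fpm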
I've created a docker image for my database server and one for the web application. Using the documentation, I'm able to link both containers using environment variables as follows:
value="jdbc:postgresql://${DB_PORT_5432_TCP_ADDR}:${DB_PORT_5432_TCP_PORT}/db_name"
It works fine, but it would be better if the environment variables were more general and did not contain a static port number. Something like:
value="jdbc:postgresql://${DB_URL}:${DB_PORT}/db_name"
Is there any way to link the environment variables? For example, by using the ENV instruction in the Dockerfile (ENV DB_URL=$DB_PORT_5432_TCP_ADDR), or by using the --env argument when running the image (docker run ... -e DB_URL=$DB_PORT_5432_TCP_ADDR docker_image)?
Without building this kind of functionality into your docker startup shell scripts or another orchestration mechanism, it is not currently possible to create environment variables like the ones you are describing. You do mention a couple of workarounds; however, the problem with using -e DB_URL=... in your docker run command is that $DB_PORT_5432_TCP_ADDR is not defined in the host shell at that point, only inside the container, so you will not be able to set the value that way. Typically, this is what your orchestration layer is for: service discovery and passing this kind of data among your containers. There is at least one workaround mentioned here on SO that involves constructing a small shell script, set via your CMD or ENTRYPOINT directives, that maps the link-generated variables to the ones your application expects; see the sketch below.
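A minimal sketch of that entrypoint approach (the script name is arbitrary; the variable names come from your question):

#!/bin/sh
# entrypoint.sh -- runs inside the container, where the link-generated
# variables are actually defined; map them to the names the app expects
export DB_URL=$DB_PORT_5432_TCP_ADDR
export DB_PORT=$DB_PORT_5432_TCP_PORT
# hand off to whatever command the container was started with
exec "$@"

In the Dockerfile you would then set ENTRYPOINT ["/entrypoint.sh"] and keep your normal CMD.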
I have two containers, webinterface and db. webinterface is started using the --link option (for db), which generates these environment variables:
DB_PORT_1111_TCP=tcp://172.17.0.5:5432
DB_PORT_1111_TCP_PROTO=tcp
DB_PORT_1111_TCP_PORT=1111
DB_PORT_1111_TCP_ADDR=172.17.0.5
...
Now my webinterface container uses a Dockerfile in which some static environment variables are defined for the connection:
ENV DB_HOST localhost
ENV DB_PORT 2222
I know there is also a -e option for docker run. The problem is that I want to use those variables in the Dockerfile (they are used in some scripts) but overwrite them with the values generated by the --link option, i.e. something like:
docker run -d -e DB_HOST=$DB_PORT_1111_TCP_ADDR
This would use the environment variable as defined on the host, which doesn't work here.
Is there a way to handle this?
This is a variable expansion issue; to resolve it, try the following:
docker run -d -e DB_HOST="$DB_PORT"_1111_TCP_ADDR
Once a Unix process is running, its environment variables can only be changed from inside the process, not from the outside, so they are somewhat non-dynamic by nature.
If you find Docker links limiting, you are not the only person out there. One simple solution would be to use WeaveDNS. With WeaveDNS you can simply use default ports (as with the Weave overlay network there is no need to expose/publish/remap any internal ports) and resolve each component via DNS (i.e. your app would just need to look for db.weave.local, and doesn't need to be aware of the clunky environment-variable scheme that Docker links present). To get a better idea of how WeaveDNS works, check out one of the official getting started guides. WeaveDNS effectively gives you service discovery without having to modify the application you have.
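Concretely, the static ENV values from the Dockerfile above would then just name the service and the default PostgreSQL port (a sketch, assuming the db container registers as db in WeaveDNS):

ENV DB_HOST db.weave.local
ENV DB_PORT 5432

No --link and no run-time overriding is needed, because the hostname resolves to whichever container is currently providing the service.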