docker remote api set env

How can I overwrite environment variables inside a container after creation with the remote API? I see no such option in the container update method description. But Docker itself does this when linking containers (source) to provide port and host variables:
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
...
I need to provide the same kind of variables for other infrastructure elements that are not managed by Docker, and each time I run a container these variables could be different.
I think the flow should look like this:
1. Initialize the container's dependencies.
2. Create the container itself.
3. Run the container's dependencies.
4. Get the dependencies' parameters (IP, ports, etc.).
5. Configure the container environment (as I thought, with container update).
6. Run the container.
Steps 3 to 6 could be repeated many times for one instance.
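Note that the remote API only accepts Env at container creation time (the update endpoint deals with resource limits), so in practice steps 4 to 6 amount to removing and recreating the container with fresh Env values. A minimal sketch using curl against the engine socket, assuming a hypothetical image and container name myapp and API version v1.24:
# drop the previous container (if any), then recreate it with the new Env values
curl --unix-socket /var/run/docker.sock -X DELETE "http://localhost/v1.24/containers/myapp?force=true"
curl --unix-socket /var/run/docker.sock -X POST \
  -H "Content-Type: application/json" \
  -d '{"Image": "myapp", "Env": ["DB_PORT_5432_TCP_ADDR=172.17.0.5", "DB_PORT_5432_TCP_PORT=5432"]}' \
  "http://localhost/v1.24/containers/create?name=myapp"
# start the freshly created container
curl --unix-socket /var/run/docker.sock -X POST "http://localhost/v1.24/containers/myapp/start"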

Related

Ansible commands on docker containers?

Up to now I have set up my ansible-playbook commands to run on AWS EC2 instances.
Can I run regular Ansible modules (lineinfile, apt, pip, etc.) on a container?
Can I add my container IP to the hosts file in a containers group and have the same code work? That is, if I change my main.yml file that has
hosts: ec2-group
to
hosts: containers-group
will all the commands still work?
I am a bit of a beginner at this, so please confirm. I am actually thinking of writing docker-compose files from scratch and running docker-compose commands using Ansible.
You can, but it's not really how Docker is designed to be used.
A Docker container is usually a wrapper around a single process. In the standard setup you create an image that has that application built and packaged, and you can just run it without any further setup. It's not usually interesting to run a bare Linux distribution container (which won't have an application installed) or to run an interactive shell as the main container process. Tutorials like Docker's Build and run your image walk through this sequence.
A corollary to this is that containers don't usually have any local state. In the best case any state a container needs is in an external database; if you can't do that then you store local state in a volume that outlives the container.
Finally, it's extremely routine to delete and recreate containers. You need to do this to change some common options; in a cluster environment like Kubernetes this can happen outside your control. When this happens the new container will restart running its default setup, and it won't know about any manual changes the previous container might have had.
So you don't usually want to try to install software directly in a running container, since that will get lost as soon as the container exits. You can, in principle, get a shell in a container (via docker exec) but this is more of a debugging tool than an administration tool. You could make the only process a container runs be an ssh daemon, but anything you start this way will get lost as soon as the container exits (and I've never seen a recipe that correctly and securely sets up credentials to access it).
I'd recommend learning the standard Dockerfile system and running self-contained Docker images over trying to adapt Ansible to this rather different environment.
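For reference, a minimal sketch of that workflow (the image name, app.py, and port are made up for illustration):
# a minimal Dockerfile describing how to build and start the app
cat > Dockerfile <<'EOF'
FROM python:3.11
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EOF
# build the image once, then run it anywhere without further setup
docker build -t myapp .
docker run -d -p 8000:8000 myapp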

Binding ports when running Docker images in Singularity

I am currently working on a distributed graph processing platform which maintains an Akka cluster inside of docker containers and have recently been granted access to a large cluster to test this. Unfortunately, this cluster does not run docker, only singularity.
This did not initially seem an issue, as Singularity supports Docker images; however, due to the nature of the Akka cluster, I have to pass several environment variables and bind several ports. As an example, a 'Partition Manager' within the system would be run with the following command:
docker run -p $PM0Port:2551 --rm -e "HOST_IP=$IP" -e "HOST_PORT=$PM0Port" -v $entityLogs:/logs/entityLogs $Image partitionManager $PM0ID $NumberOfPartitions $ZooKeeper
From looking through the Singularity documentation I can see that I can create a 'Singularity' file and specify the environment variables, but there doesn't seem to be any documentation on binding custom ports. Nor does it explain how I could pass arguments to the default entrypoint (the project is built with 'sbt docker:publish', so I am not sure exactly where the entrypoint is defined in order to reassign it).
Even if this was the solution, as there are multiple actor types (and several instances of each) it appears specifying environment variables and ports in a document would require templating, creating the files at run time, and building an image for each individual actor.
I am sure I have completely missed a page somewhere which would nicely translate this docker command into the equivalent singularity, but I just can't find it.
There is no network isolation in Singularity, so there is no need to map any port. If the process inside the container binds to an IP:port, it will be immediately reachable on the host.
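As an illustration, the docker run command above might translate roughly as follows (a sketch, assuming a recent Singularity version; SINGULARITYENV_* variables are injected into the container environment, --bind replaces -v, and the -p mapping is simply dropped):
# no port mapping needed: the process binds directly on the host network
SINGULARITYENV_HOST_IP=$IP \
SINGULARITYENV_HOST_PORT=$PM0Port \
singularity run \
  --bind "$entityLogs":/logs/entityLogs \
  docker://$Image partitionManager $PM0ID $NumberOfPartitions $ZooKeeper
Arguments placed after the image are handed to the runscript that Singularity generates from the image's entrypoint, which roughly matches how docker run forwards them.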

Define Docker container volume bindings in a configuration file?

Is there a way to define all the volume bindings either in the Dockerfile or another configuration file that I am not aware of?
Since volume bindings are used when you create a container, you can't define them in the Dockerfile (which is used to build your Docker image, not the container).
If you want a way to define the volume bindings without having to type them every time, you have the following options:
Create a script that runs the docker command and includes all of the volume options.
If you want to run more than one container, you can also use Docker Compose and define the volume bindings in the docker-compose.yaml file: https://docs.docker.com/compose/compose-file/#/volumes-volumedriver
Out of the two, I prefer Docker Compose, since it includes lots of other cool functionality, e.g. allowing you to define the port bindings, having links between containers, etc. You can do all of that in a script as well, but as soon as you use more than one container at a time for the same application (e.g. a web server container talking to a database container), Docker Compose makes a lot of sense, since you have the configuration in one place, and you can start/stop all of your containers with one single command.
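As a sketch, the compose file for a single service (service name, image, and paths are placeholders) could look like this; docker-compose up -d then starts everything with the declared bindings:
# write the compose file once; the volume bindings live next to the rest of the service definition
cat > docker-compose.yaml <<'EOF'
version: "3"
services:
  web:
    image: myapp
    ports:
      - "8000:8000"
    volumes:
      - ./config:/app/config
      - app-data:/var/lib/app
volumes:
  app-data:
EOF
docker-compose up -d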

How to add the Env variable to running docker container

I'm running a container with default env variables (like PORTS=1234,1235,1236) already defined in the Dockerfile.
With the help of these, a script is executed at runtime to start the naming services on the defined ports.
Once the container is running, I want to start the naming service on ports 1237 and 1238 along with the existing ports, without stopping the container.
Let me know if anybody needs more info.
Please suggest the best approach.
The idea behind containers is a single, self-contained process that runs an application. However, that doesn't always work, and sometimes you need to run multiple things in a single container. To kick off the services automatically, you should create a script, use the ADD (or COPY) instruction in the Dockerfile to get it into the image, and then use the ENTRYPOINT instruction to execute that script (see the sketch at the end of this answer).
If you really want to do this at runtime instead of at container startup, you can do one of the following:
Add SSH capabilities to the container (bad idea)
Start the container with the -i "interactive" switch and a shell as the entrypoint, so that you can attach to and detach from the container (not recommended).
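A rough sketch of the script/ENTRYPOINT approach from the first paragraph (start-services.sh, the naming-service command, and the image name are placeholders):
# start-services.sh: start one naming service per port listed in PORTS
cat > start-services.sh <<'EOF'
#!/bin/sh
for port in $(echo "$PORTS" | tr ',' ' '); do
    start-naming-service --port "$port" &
done
wait
EOF
# in the Dockerfile:
#   ADD start-services.sh /start-services.sh
#   ENTRYPOINT ["/start-services.sh"]
# adding ports then means recreating the container with a longer list,
# rather than changing the running one:
docker run -d -e PORTS=1234,1235,1236,1237,1238 myimage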

Packaging an app in docker that can be configured at run time

I have packaged a web app I've been working on as a docker image.
I want to be able to start the image with some configuration, like the URL of the CouchDB server to use, etc.
What is the best way of supplying configuration? My app relies on env variables; can I set these at run time?
In addition to setting environment variables during docker run (using -e/--env and --env-file) as you already discovered, there are other options available:
Using --link to link your container to (for instance) your couchdb server. This will work if your server is also a container (or if you use an ambassador container to another server). Linking containers will make some environment variables available, including server IP and port, that your script can use. This will work if you only need to set references to services.
Using volumes. Volumes defined in the Dockerfile can be mapped to host folders, so you can use them to access configuration files, for instance. This is useful for very complex configurations.
Extending the image. You can create a new image based on your original and ADD custom configuration files or ENV entries (see the sketch below). This is the least flexible option but is useful in complex configurations to simplify the launching, especially when the configuration is mostly static (probably a bad idea for services/hostnames, but it can work for frameworks that can be configured differently for dev/production). Can be combined with any of the above.
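As an illustration of the last option (the image name, variable, and file names are made up):
# extend the original image with configuration baked in
cat > Dockerfile <<'EOF'
FROM myapp
# mostly-static settings as ENV entries
ENV COUCHDB_URL=http://couchdb:5984
# plus any custom configuration files
ADD production.ini /app/config/
EOF
# build and run the pre-configured variant
docker build -t myapp:production .
docker run -d myapp:production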
It seems docker supports setting env variables - should have read the manual!
docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
http://docs.docker.com/reference/commandline/cli/#run
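For completeness, the --env-file variant reads one VAR=value pair per line, e.g. (variable names made up):
# env.list: one variable per line
cat > env.list <<'EOF'
MYVAR2=foo
COUCHDB_URL=http://couchdb.example.com:5984
EOF
# run 'env' in the container to confirm the variables were injected
docker run --rm --env-file ./env.list ubuntu env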
