How do you handle nontrivial environment differences with docker?

I recognize that docker is intended to reduce the friction of moving an application from one environment to another, and in many cases doing things like overriding environment variables is pretty easy at runtime.
Consider a situation where all development happens behind a corporate proxy, but the images (or containers, or Dockerfiles) then need to be shipped to a different environment with different architecture requirements. The specific case I'm thinking of: the development environment sits behind a pretty invasive corporate proxy. The image needs to be able to hit services on the internet in order to function, so the working Dockerfile looks something like this in development:
FROM centos
ENV http_proxy=my.proxy.url \
    https_proxy=my.proxy.url
# these lines are required for the proxy to be trusted; most apps block it otherwise b/c SSL inspection
COPY ./certs/*.pem /etc/pki/ca-trust/source/anchors/
RUN /usr/bin/update-ca-trust extract
## more stuff to actually run the app, etc.
In the production environment, there is no proxy and no need to extract pem files. I recognize that I can set the environment variables to not use the proxy at runtime (or conversely, set them only during development), but either way this feels pretty leaky to me in terms of the quasi-encapsulation I expect from Docker.
I recognize as well that in this particular example it's not that big a deal to copy and extract the pem files that won't be used in production, but it made me wonder about best practices in this space, as I'm sure this isn't the only example.
Ideally I would like to let the host machine manage the proxy requirements (and really, any environment differences), but I haven't been able to find a way to do that except by modifying environment variables.
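A minimal sketch of the "set them only during development" variant, using Docker's predefined proxy build args (an assumption on my part that build-time access is the main need - runtime traffic would still need the runtime -e approach):
# http_proxy/https_proxy are predefined build args, so no ARG or ENV lines are needed,
# and the values are not persisted into the final image
docker build --build-arg http_proxy=my.proxy.url --build-arg https_proxy=my.proxy.url .
# in production, the same Dockerfile builds with no --build-arg at all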

You might be able to use iptables on your development machine to transparently route traffic from containers to the proxy. Then your image would be the same in each environment it runs in; the network differences would be managed by the host. See http://silarsis.blogspot.nl/2014/03/proxy-all-containers.html for more information.
IMO I wouldn't worry too much about it if it works. The image still runs in every environment, so you're not really "giving something up" other than semantics :)
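A rough sketch of that approach, assuming a transparent-capable proxy (e.g. redsocks, or squid in transparent mode) is listening locally on port 12345 - a plain REDIRECT at an ordinary corporate proxy won't work, since it expects proxy-style requests:
# on the development host: send outbound HTTP from containers on the docker0 bridge
# to the local transparent proxy instead of letting it go out directly
iptables -t nat -A PREROUTING -i docker0 -p tcp --dport 80 -j REDIRECT --to-ports 12345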

You can probably configure this at the Docker Engine level, using the instructions at: https://docs.docker.com/engine/admin/systemd/#httphttps-proxy
Create a systemd drop-in directory for the docker service:
$ mkdir -p /etc/systemd/system/docker.service.d
Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Or, if you are behind an HTTPS proxy server, create a file called /etc/systemd/system/docker.service.d/https-proxy.conf that adds the HTTPS_PROXY environment variable:
[Service]
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
If you have internal Docker registries that you need to contact without proxying, you can specify them via the NO_PROXY environment variable:
Environment="HTTP_PROXY=http://proxy.example.com:80/"
"NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"
Or, if you are behind an HTTPS proxy server:
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
"NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"
Flush changes:
$ sudo systemctl daemon-reload
Restart Docker:
$ sudo systemctl restart docker
Verify that the configuration has been loaded:
$ systemctl show --property=Environment docker
Environment=HTTP_PROXY=http://proxy.example.com:80/
Or, if you are behind an HTTPS proxy server:
$ systemctl show --property=Environment docker
Environment=HTTPS_PROXY=https://proxy.example.com:443/
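Collected into a single sketch (adjust the proxy URL and NO_PROXY list for your environment):
$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /dev/null <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/" "NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"
EOF
$ sudo systemctl daemon-reload                     # flush changes
$ sudo systemctl restart docker                    # restart the daemon
$ systemctl show --property=Environment docker     # verify the configuration was loaded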

Related

Keycloak Docker image basic unix commands not available

I have set up my Keycloak identity server by running a .yml file that uses the docker image jboss/keycloak:9.0.0.
Now I want to get inside the container and modify some files in order to do some testing.
Unfortunately, after I got inside the running container, I realized that some very basic UNIX commands like sudo or vi (and many more) aren't available, nor are package managers like apt-get or yum, which I tried to use to install them.
According to this question, it seems that the underlying OS of the container (Red Hat Universal Base Image) uses the command microdnf to manage software, but unfortunately when I tried to use that command for any action I got the following message:
error: Failed to create: /var/cache/yum/metadata
Could you please suggest a workaround for my case? I just need a text editor like vi, and root privileges for my user (so commands like sudo, su, or chmod). Thanks in advance.
If you still, for some reason, want to exec into the container, try adding --user root to your docker exec command.
Exec'ing into the container without --user will do so as the jboss user, which has fewer privileges.
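For example (a sketch, assuming the microdnf failure was purely a permissions issue and the UBI repositories are reachable from inside the container):
docker exec -it --user root <container-name-or-id> /bin/bash
# then, inside the container as root:
microdnf install -y vim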
It looks like you are trying to use an approach from the non-Docker (old school) world in the Docker world, and that's not the right fit. Usually you don't need to go into the container and edit any config file there; such a change will very likely be lost (it depends on the container configuration). Containers are usually configured via environment variables or volumes.
Example how to use TLS certificates: Keycloak Docker HTTPS required
https://hub.docker.com/r/jboss/keycloak/ is also a good starting point to check the available environment variables, which may help you achieve what you need. For example, PROXY_ADDRESS_FORWARDING=true enables the option where you can run the Keycloak container behind a load balancer without touching any config file.
I would also say that adding your own config files at build time is not the best option - you will have to maintain your own image. Just use volumes and "override" the default config file(s) in the container with your own config file(s) from the host OS file system, e.g.:
-v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
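Putting those together, a minimal sketch (admin credentials and other options omitted) might look like:
docker run -d \
  -e PROXY_ADDRESS_FORWARDING=true \
  -v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml \
  jboss/keycloak:9.0.0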

Set permanent docker build --build-arg value for my environment

Working behind a corporate proxy - I need to build my docker images like
docker build --build-arg http_proxy=http://my.proxy:80 .
and that's fine.
I have a script that I've checked out that does a bunch of docker builds - and those fail because they're not going through the proxy.
Is there a way to set my local environment to always use my proxy settings when doing docker build?
I did look at creating an alias - but that seems a bit gnarly given there's a space between the commands. Is there a simple global config I can modify?
First of all, make sure to configure the http_proxy setting for the docker daemon as described in HTTP/HTTPS proxy.
That configuration should be enough for docker to pick it up and use it when building the image. However, if the commands that the Dockerfile runs create their own custom connections, the configuration may not be picked up properly.
The proxy settings can be picked up from docker info:
$ docker info | grep Proxy
Http Proxy: http://localhost:3128
Https Proxy: http://localhost:3128
You can use the values picked up by docker info.
However, what I recommend is to install a tool to transparently route all traffic to the HTTP proxy. That way you can forget about the proxy, and all tools on your machine should work seamlessly.
If you are on Linux, there is redsocks. There is also a docker image for it if you don't want to install it directly on the machine. For other platforms you can use proxycap.
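If you'd rather keep the --build-arg approach, the alias idea from the question can be done with a shell function instead (a sketch using the proxy URL from the question):
# e.g. in ~/.bashrc: wrap docker so every `docker build` picks up the proxy build-args
docker() {
  if [ "$1" = "build" ]; then
    shift
    command docker build --build-arg http_proxy=http://my.proxy:80 \
                         --build-arg https_proxy=http://my.proxy:80 "$@"
  else
    command docker "$@"
  fi
}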

Using same Docker machine across different client devices

We want to set up a Docker development node that anybody on our team can deploy things to.
I created a new Docker machine using SSH, like this:
docker-machine create \
--driver generic \
--generic-ip-address=xxx.xxx.xxx.xxx \
--generic-ssh-user=myuser \
mymachine
Using docker-machine env mymachine, I set up my environment. But what steps does another developer need to perform to have access to the same machine?
Unfortunately, there is not anything like docker-machine add ... (https://github.com/docker/machine/issues/3212)
What's the easiest and the current Docker'ic way of achieving it? Just having the other developer export the same environment variables?
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://xxx.xxx.xxx.xxx:2376"
export DOCKER_CERT_PATH="/Users/user/.docker/machine/machines/mymachine"
export DOCKER_MACHINE_NAME="mymachine"
But what about the certs? Copy the same certs over, or generate new ones for him?
In my experience, development docker workflows are much more pleasant when run locally. You can mount your file system for quick iteration. And when building images, the time to copy context is much reduced. Plus when installing the docker command-line, your team may install docker engine as well.
But I get that you might want to prove out docker without asking folks to maintain a VM or install locally - so on to actual answers:
What steps does another developer need to perform to have access to the same machine?
Install docker.
Set host + certificate in the environment.
The environment variables from docker-machine env (and the files referenced there) would be enough. Though that still leaves you with the issue of copying the certificates around - as discussed in your github link.
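If you do go that route, a rough sketch (assuming you're fine sharing the client certificates, and with hypothetical host/user names):
# on your machine: copy the machine's client certs to the other developer
scp -r ~/.docker/machine/machines/mymachine otherdev@their-host:~/docker-certs/mymachine
# on the other developer's machine: point the docker CLI at the shared engine
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://xxx.xxx.xxx.xxx:2376"
export DOCKER_CERT_PATH="$HOME/docker-certs/mymachine"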
Copy the same certs over or generate new ones?
(Based on the TLS configuration) I believe a docker daemon can only support one set of certs.
What's the easiest and the current Docker'ic way of achieving [a shared machine]?
The certificate is there for your security, but it can be disabled. If you're confident in your local network security and are using the service for development only, you can have the host expose an unencrypted TCP port.
That can be done via docker-machine at create time (example from this question: boot2docker without tls verification):
docker-machine create -d virtualbox --engine-env DOCKER_TLS=no --engine-opt host=tcp://0.0.0.0:2375 node1
Once the service is exposed on a tcp port with TLS disabled, anyone can access it from the docker command line with the -H flag.
docker -H xxx.xxx.xxx.xxx:2375 images
Setting the DOCKER_HOST environment variable will save some typing.
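For example (host IP as in the question):
export DOCKER_HOST=tcp://xxx.xxx.xxx.xxx:2375
docker images    # no -H flag needed any more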

How to dynamically set environment variables of linked containers?

I have two containers, webinterface and db, where webinterface is started using the --link option (pointing at db), which generates these environment variables:
DB_PORT_1111_TCP=tcp://172.17.0.5:5432
DB_PORT_1111_TCP_PROTO=tcp
DB_PORT_1111_TCP_PORT=1111
DB_PORT_1111_TCP_ADDR=172.17.0.5
...
Now my webinterface container uses a Dockerfile where some static environment variables are defined for the connection:
ENV DB_HOST localhost
ENV DB_PORT 2222
Knowing that there is also an -e option for docker run: the problem is that I want to use those variables from the Dockerfile (they're used in some scripts) but overwrite them with the values generated by the --link option, i.e. something like:
docker run -d -e DB_HOST=$DB_PORT_1111_TCP_ADDR
This would use the environment variable as defined on the host, which doesn't work here.
Is there a way to handle this?
This is a variable expansion issue; to resolve it, try the following:
docker run -d -e DB_HOST="$DB_PORT"_1111_TCP_ADDR
With a Unix process that is already running, its environment variables can only be changed from inside the process, not from the outside, so they are somewhat non-dynamic by nature.
If you find Docker links limiting, you are not the only person out there. One simple solution would be to use WeaveDNS. With WeaveDNS you can simply use default ports (as with the Weave overlay network there is no need to expose/publish/remap any internal ports) and resolve each component via DNS (i.e. your app would just look up db.weave.local, and doesn't need to be aware of the clunky environment-variable scheme that Docker links present). To get a better idea of how WeaveDNS works, check out one of the official getting started guides. WeaveDNS effectively gives you service discovery without having to modify your application.
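One common workaround in that spirit (a sketch - the entrypoint name and fallback logic are mine, not from the question) is to resolve the link variables inside the container in an entrypoint script, so the static ENV values from the Dockerfile only act as defaults:
#!/bin/sh
# docker-entrypoint.sh: prefer the values injected by --link, fall back to the Dockerfile ENV defaults
export DB_HOST="${DB_PORT_1111_TCP_ADDR:-$DB_HOST}"
export DB_PORT="${DB_PORT_1111_TCP_PORT:-$DB_PORT}"
exec "$@"
Wire it up with ENTRYPOINT ["/docker-entrypoint.sh"] in the webinterface Dockerfile so it runs before the main command.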

Packaging an app in docker that can be configured at run time

I have packaged a web app I've been working on as a docker image.
I want to be able to start the image with some configuration, like the URL of the CouchDB server to use, etc.
What is the best way of supplying configuration? My app relies on env variables - can I set these at run time?
In addition to setting environment variables during docker run (using -e/--env and --env-file) as you already discovered, there are other options available:
Using --link to link your container to (for instance) your couchdb server. This will work if your server is also a container (or if you use an ambassador container to another server). Linking containers will make some environment variables available, including server IP and port, that your script can use. This will work if you only need to set references to services.
Using volumes. Volumes defined in the Dockerfile can be mapped to host folders, so you can use them to access configuration files, for instance. This is useful for very complex configurations.
Extending the image. You can create a new image based on your original and ADD custom configuration files or ENV entries. This is the least flexible option, but it is useful for complex configurations to simplify launching, especially when the configuration is mostly static (probably a bad idea for services/hostnames, but it can work for frameworks that can be configured differently for dev/production). It can be combined with any of the above. A sketch combining the options is shown below.
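For illustration, a hedged sketch combining runtime environment variables with a volume-mounted config directory (the image name, variable, and paths are made up):
docker run -d \
  -e COUCHDB_URL=http://couchdb.example.com:5984 \
  -v /host/path/app-config:/etc/mywebapp:ro \
  mywebapp:latest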
It seems docker supports setting env variables - should have read the manual!
docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
http://docs.docker.com/reference/commandline/cli/#run
