Using same Docker machine across different client devices - docker

We want to set up a Docker development node where anybody in our team can deploy things to.
I created a new Docker machine using SSH, like this:
docker-machine create \
--driver generic \
--generic-ip-address=xxx.xxx.xxx.xxx \
--generic-ssh-user=myuser \
mymachine
Using docker-machine env mymachine, I set up my environment. But what steps does another developer need to perform to have access to the same machine?
Unfortunately, there is nothing like docker-machine add ... (https://github.com/docker/machine/issues/3212)
What's the easiest and the current Docker'ic way of achieving it?
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://xxx.xxx.xxx.xxx:2376"
export DOCKER_CERT_PATH="/Users/user/.docker/machine/machines/mymachine"
export DOCKER_MACHINE_NAME="mymachine"
But what about the certs? Should we copy the same certs over, or generate new ones for the other developer?

In my experience, development docker workflows are much more pleasant when run locally. You can mount your file system for quick iteration, and when building images, the time to copy the build context is much reduced. Plus, when installing the docker command line, your team will likely end up installing the docker engine as well.
But I get that you might want to prove out docker without asking folks to maintain a VM or install locally - so on to actual answers:
What steps does another developer need to perform to have access to the same machine?
Install docker.
Set host + certificate in the environment.
The environment variables from docker-machine env (and the files referenced there) would be enough, though that still leaves you with the issue of copying the certificates around - as discussed in your GitHub link.
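In practice, that can be as simple as copying the machine's cert directory to the other developer and having them export the same variables by hand. A minimal sketch follows; the paths, user names and IP are placeholders, not something docker-machine generates for them:
# On the other developer's machine: copy the certs that docker-machine generated for mymachine
scp -r you@your-workstation:~/.docker/machine/machines/mymachine ~/.docker/mymachine-certs
# Point the local docker client at the shared daemon using those certs
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://xxx.xxx.xxx.xxx:2376"
export DOCKER_CERT_PATH="$HOME/.docker/mymachine-certs"
docker info   # should now talk to the shared daemon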
Copy the same certs over or generate new ones?
(Based on the tls configuration) I believe a docker daemon can only support one set of certs.
What's the easiest and the current Docker'ic way of achieving [a shared machine]?
The certificate is there for your security, but it can be disabled. If you're confident in your local network security and are using the service for development, you can have the host expose a plain HTTP port.
That can be done via docker-machine at create time (example from this question: boot2docker without tls verification):
docker-machine create -d virtualbox --engine-env DOCKER_TLS=no --engine-opt host=tcp://0.0.0.0:2375 node1
Once the service is exposed on a tcp port with TLS disabled, anyone can access it from the docker command line with the -H flag.
docker -H xxx.xxx.xxx.xxx:2375 images
Setting the DOCKER_HOST environment variable will save some typing.
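For example (a sketch; the IP is a placeholder for your host):
export DOCKER_HOST="tcp://xxx.xxx.xxx.xxx:2375"
docker images
docker ps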

Keycloak Docker image basic unix commands not available

I have set up my Keycloak identification server by running a .yml file that uses the Docker image jboss/keycloak:9.0.0.
Now I want to get inside the container and modify some files in order to do some testing.
Unfortunately, after I got inside the running container, I realized that some very basic UNIX commands like sudo or vi (and many more) aren't found, and commands like apt-get or yum, which I would normally use to install packages, fail as well.
According to this question, it seems that the underlying OS of the container (Red Hat Universal Base Image) uses the command microdnf to manage software, but unfortunately when I tried to use this command for any action I got the following message:
error: Failed to create: /var/cache/yum/metadata
Could you please propose any workaround for my case? I just need to use a text editor command like vi, and root privileges for my user (so commands like sudo, su, or chmod). Thanks in advance.
If you still, for some reason, want to exec into the container, try adding --user root to your docker exec command.
Just exec'ing into the container without --user will do so as the jboss user, which has fewer privileges.
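For example, something like this should work (a sketch; keycloak is an assumed container name - use whatever docker ps shows):
docker exec -it --user root keycloak /bin/bash
# inside the container, microdnf should now work as root; package availability depends on the UBI repos, e.g.:
microdnf install -y vim-minimal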
It looks like you are trying to use an approach from the non-Docker (old school) world in the Docker world. That's not right. Usually, you don't need to go into the container and edit any config file there - that change will very likely be lost (it depends on the container configuration). Containers are usually configured via environment variables or volumes.
Example of how to use TLS certificates: Keycloak Docker HTTPS required
https://hub.docker.com/r/jboss/keycloak/ is also a good starting point to check the available environment variables, which may help you achieve what you need. For example, PROXY_ADDRESS_FORWARDING=true enables you to run the Keycloak container behind a load balancer without touching any config file.
I would also say that adding your own config files at build time is not the best option - you will have to maintain your own image. Just use volumes and "override" the default config file(s) in the container with your own config file(s) from the host OS file system, e.g.:
-v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
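With plain docker run, that mount would look roughly like this (a sketch; the image tag and port mapping are assumptions):
docker run -d --name keycloak \
  -p 8080:8080 \
  -v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml \
  jboss/keycloak:9.0.0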

How to run container in a remote docker host with Jenkins

I have two servers:
Server A: Build server with Jenkins and Docker installed.
Server B: Production server with Docker installed.
I want to build a Docker image in Server A, and then run the corresponding container in Server B. The question is then:
What's the recommended way of running a container in Server B from Server A, once Jenkins is done with the docker build? Do I have to push the image to Docker Hub in order to pull it in Server B, or can I somehow transfer the image directly?
I'm really not looking for specific Jenkins plugins or stuff, but rather, from a security and architecture standpoint, what's the best approach to accomplish this?
I've read a ton of posts and SO answers about this and have come to realize that there are plenty of ways to do it, but I'm still unsure what's the ultimate, most common way to do this. I've seen these alternatives:
Using docker-machine
Using Docker Restful Remote API
Using plain ssh root@server.b "docker run ..."
Using Docker Swarm (I'm super noob so I'm still unsure if this is even an option for my use case)
Edit:
I run Servers A and B in Digital Ocean.
A Docker image can be saved to a regular tar archive:
docker image save -o <FILE> <IMAGE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_save/
Then scp this tar archive to another host, and run docker load to load the image:
docker image load -i <FILE>
Docs here: https://docs.docker.com/engine/reference/commandline/image_load/
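Putting the three steps together, the whole transfer looks roughly like this (a sketch; host names and the image name are placeholders):
# On Server A
docker image save -o myapp.tar myapp:latest
scp myapp.tar user@server-b:/tmp/
# On Server B
docker image load -i /tmp/myapp.tar
docker run -d myapp:latest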
This save-scp-load method is rarely used, though. The common approach is to set up a private Docker registry behind your firewall and push images to or pull them from that registry. This doc describes how to deploy a container registry. Or you can choose a registry service provided by a third party, such as GitLab's container registry.
When pushing to or pulling from a Docker registry, only the layers that have changed are transferred.
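With a private registry, the Jenkins side pushes and the production side pulls, roughly like this (a sketch; registry.example.com and the image name are placeholders):
# On Server A, after the Jenkins build
docker tag myapp:latest registry.example.com/myapp:latest
docker push registry.example.com/myapp:latest
# On Server B
docker pull registry.example.com/myapp:latest
docker run -d registry.example.com/myapp:latest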
You can use the Docker REST API. The Jenkins HTTP Request plugin can be used to make the HTTP requests. You can also run Docker commands directly on a remote Docker host by setting the DOCKER_HOST environment variable. To export the environment variable to the current shell:
export DOCKER_HOST="tcp://your-remote-server.org:2375"
Please be aware of the security concerns when allowing TCP traffic. More info.
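For instance, once the daemon listens on TCP, the remote API can be queried with plain HTTP requests (a sketch; the host name is a placeholder, and the port is unauthenticated, so restrict access at the firewall level):
curl http://your-remote-server.org:2375/images/json
curl http://your-remote-server.org:2375/containers/json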
Another method is to use SSH Agent Plugin in Jenkins.
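In that setup, the Jenkins job simply runs the docker commands over SSH on Server B (a sketch; the user, host and image name are placeholders):
ssh deploy@server-b "docker pull registry.example.com/myapp:latest && docker run -d --name myapp registry.example.com/myapp:latest"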

How do you handle nontrivial environment differences with docker?

I recognize that docker is intended to reduce the friction of moving an application from one environment to another, and in many cases doing things like overriding environment variables is pretty easy at runtime.
Consider a situation where all development happens behind a corporate proxy, but then the images (or containers or Dockerfiles) need to be shipped to a different environment which has different architecture requirements. The specific case I'm thinking of is that the development environment includes a pretty invasive corporate proxy. The image needs (in order to function) the ability to hit services on the internet, so the working Dockerfile looks something like this in development:
FROM centos
ENV http_proxy=my.proxy.url \
    https_proxy=my.proxy.url
# these lines are required for the proxy to be trusted; most apps block it otherwise b/c of SSL inspection
COPY ./certs/*.pem /etc/pki/ca-trust/source/anchors/
RUN /usr/bin/update-ca-trust extract
## more stuff to actually run the app, etc
In the production environment, there is no proxy and no need to extract pem files. I recognize that I can set the environment variables to not use the proxy at runtime (or conversely, set them only during development), but either way this feels pretty leaky to me in terms of the quasi-encapsulation I expect from Docker.
I recognize as well that in this particular example, it's not that big a deal to copy and extract the pem files that won't be used in production, but it made me wonder about best practices in this space, as I'm sure this isn't the only example.
Ideally I would like to let the host machine manage the proxy requirements (and really, any environment differences), but I haven't been able to find a way to do that except by modifying environment variables.
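For reference, the "set them only during development" option I mentioned would look roughly like this (a sketch; the proxy URL is a placeholder, and the predefined proxy build args are passed per build rather than written into the Dockerfile):
# Development build: pass the proxy only for this build
docker build --build-arg http_proxy=http://my.proxy.url --build-arg https_proxy=http://my.proxy.url -t myapp .
# Production build: same Dockerfile, no proxy args
docker build -t myapp .
# Development run: inject the proxy only where it is needed
docker run -e http_proxy=http://my.proxy.url -e https_proxy=http://my.proxy.url myapp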
You might be able to use iptables on your development machine to transparently route container traffic through the proxy. Then your image would be the same in each environment it runs in; the network differences would be managed by the host. See http://silarsis.blogspot.nl/2014/03/proxy-all-containers.html for more information.
IMO I wouldn't worry too much about it if it works. Image still runs in every environment so you're not really "giving something up" other than semantics :)
You can probably configure this at the Docker Engine level, using the instruction at: https://docs.docker.com/engine/admin/systemd/#httphttps-proxy
Create a systemd drop-in directory for the docker service:
$ mkdir -p /etc/systemd/system/docker.service.d
Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Or, if you are behind an HTTPS proxy server, create a file called /etc/systemd/system/docker.service.d/https-proxy.conf that adds the HTTPS_PROXY environment variable:
[Service]
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
If you have internal Docker registries that you need to contact without proxying, you can specify them via the NO_PROXY environment variable:
Environment="HTTP_PROXY=http://proxy.example.com:80/"
"NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"
Or, if you are behind an HTTPS proxy server:
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
"NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"
Flush changes:
$ sudo systemctl daemon-reload
Restart Docker:
$ sudo systemctl restart docker
Verify that the configuration has been loaded:
$ systemctl show --property=Environment docker
Environment=HTTP_PROXY=http://proxy.example.com:80/
Or, if you are behind an HTTPS proxy server:
$ systemctl show --property=Environment docker
Environment=HTTPS_PROXY=https://proxy.example.com:443/

Docker - running commands from all containers

I'm using docker compose to create a basic environment for my websites (at the moment only locally, so I don't care about security issues). At the moment I'm using 3 different containers:
for nginx
for php
for mysql
I can obviously log in to any container to run commands. For example, I can ssh to the php container to verify the PHP version or run a PHP script, but the question is - is it possible to have a configuration where I could run commands for all containers from, for example, one SSH container?
For example I would like to run commands like this:
php -v
nginx restart
mysql
after logging to one common SSH for all services.
Is it possible at all? I know there is the exec command, so I could prefix each command with the container name, but that won't be flexible to use, and with more containers it would become more and more difficult.
So the question is - is it possible at all and if yes, how could it be achieved?
Your question was:
Is it possible at all?
and the answer is:
No
This is due to the two restrictions you are giving in combination. Your first restrictions is:
Use SSH not Exec
It is definitely possible to have an SSH daemon running in each container and to set up the security so that you can run ssh commands in e.g. a passwordless mode
see e.g. Passwordless SSH login
Your second restriction is:
one common SSH for all services
and this would now be the tricky part. You'd have to:
create one common ssh server in e.g. one special container for this purpose, or use one of the existing containers
create communication to or between containers
make sure that the ssh server knows which command is for which container
All in all, this would be so complicated in comparison to a simple bash or python script that can do the same with exec commands that "no" is IMHO a better answer than trying to solve the academic problem of "might there be some tricky/fancy solution for doing this".
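Such a script can be very small; for example, a sketch assuming the docker-compose service names php, nginx and mysql from the question:
#!/bin/bash
# run-in.sh - run a command inside one of the compose services
# usage: ./run-in.sh php php -v
service="$1"; shift
docker-compose exec "$service" "$@"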

What's the fastest way to migrate from boot2docker to Vagrant+NFS on Mac OS X?

I have a database container built from the official mysql image (docker pull mysql).
I have a front-end app app built with Cake.
I have a back-end app cms built with Symfony.
I have container linking set up for both app and cms to start and connect automatically to db.
Everything works great but it's super slow with boot2docker.
I've been trying to understand how to use Vagrant with NFS.
There are a few different tutorials and examples online, but so far I've been unable to get going. I have installed the latest Vagrant and used the example yungsang/boot2docker, but when I try the simplest command, docker images, I keep getting errors like FATA[0000] An error occurred trying to connect: Get https://localhost:2375/v1.16/images/json: tls: oversized record received with length 20527.
I discovered that if I vagrant ssh into the VM, I can run docker images and such, but that's not what I wanted; I am used to running docker commands straight from the Mac OS X terminal. So clearly I've misunderstood something. Also, the tutorials on the Vagrant blog use rsync and --provider=docker, which don't seem necessary with the yungsang/boot2docker vagrant box.
I would be grateful for some guidance and feel like I exhausted my Google search capabilities on this one.
Refs:
https://www.vagrantup.com/blog/feature-preview-vagrant-1-6-docker-dev-environments.html
https://github.com/boot2docker/boot2docker/issues/64
https://vagrantcloud.com/yungsang/boxes/boot2docker
Update [2015-02-11]
To answer the broader question (the one in the title) I've created a repo on Github with a Vagrantfile which will let you start with Vagrant+Docker+NFS on MacOS quickly and easily.
https://github.com/blinkreaction/boot2docker-vagrant
Original answer to the "tls: oversized record received" issue [2015-02-10]
The issue
Check your environment variables. You most likely have a mix of boot2docker shellinit and your custom DOCKER_HOST variables there. E.g.:
$ env|grep DOCKER
DOCKER_HOST=tcp://localhost:2375
DOCKER_CERT_PATH=/Users/<user>/.boot2docker/certs/boot2docker-vm
DOCKER_TLS_VERIFY=1
The reason you got here is that first $(boot2docker shellinit) exported something like this to point the docker client to the boot2docker VM:
DOCKER_HOST=tcp://192.168.59.103:2376
DOCKER_CERT_PATH=/Users/<user>/.boot2docker/certs/boot2docker-vm
DOCKER_TLS_VERIFY=1
Then you pointed your docker client to the custom VM mapped port with
export DOCKER_HOST=tcp://localhost:2375
How to fix
Short term
unset DOCKER_TLS_VERIFY
Long term
Either get rid of the $(boot2docker shellinit) in your .bashrc, .zshrc, etc. and execute it manually when needed, or keep the entries there in the following order:
# Docker (default for Vagrant based boxes)
export DOCKER_HOST=tcp://localhost:2375
# boot2docker shellinit
$(boot2docker shellinit)
This way if boot2docker is NOT running, your DOCKER_HOST will default to tcp://localhost:2375.
Otherwise $(boot2docker shellinit) will overwrite the variables and set DOCKER_HOST to point to the boot2docker VM.
