Cloud-init to configure an Ubuntu docker container?

Is it possible to use a cloud-init configuration file to define commands to be executed when a docker container is started?
I'd like to test the provisioning of an Ubuntu virtual machine using a docker container.
My idea is to provide the same cloud-init config file to an Ubuntu docker container.

No. If you want to test a VM setup, you need to use actual virtualization technology. The VM and Docker runtime environments are extremely different and you can't just substitute one technology for the other. A normal Linux VM startup will run a raft of daemons and startup scripts – systemd, crond, sshd, ifconfig, cloud-init, ... – but a Docker container will start none of these and will only run the single process in the container.
If your cloud-init script is ultimately running a docker run command, you can provide an alternate command to that container, the same way you could with docker run on your development system. But a Docker container usually won't look to places like the EC2 metadata service to find its own configuration, and it would be unusual for a container to run cloud-init at all.
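For example, a minimal sketch using a stock Ubuntu image (the package names here are illustrative): the command given after the image name replaces the image's default command, much like a cloud-init runcmd list would run at boot.
docker run --rm ubuntu:20.04 /bin/sh -c 'apt-get update && apt-get install -y nginx'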

Related

Why does minikube run as a container itself?

While playing around with Docker and orchestration (Kubernetes) I had to install and use minikube to create a simple sandbox environment. At first I thought that minikube installs some kind of VM and runs the "minified" Kubernetes environment inside it; however, after the installation, when I listed my locally running Docker containers, I found minikube running as a container!
Why does minikube itself run as a Docker container, and how can it run other containers?
Experimental Docker support looks to have been added in minikube 1.7.0, and it started becoming the default driver in minikube 1.9.0. As I'm writing this, the current version is 1.15.1.
The minikube documentation on the "docker" driver notes that, particularly on a native-Linux host, there is no intermediate virtual machine: if you can run Kubernetes in a container, it can use the entire host system's resources without special configuration or partitioning. The previous minikube-on-VirtualBox installation required preallocating memory and disk for the VM, and it was easy to get those settings wrong. Even on non-Linux hosts, if you're running Docker Desktop, sharing its hidden Linux VM can improve resource utilization, and you don't need to decide up front to allocate exactly 2 GB of RAM to Docker Desktop and exactly 4 GB to the minikube VM.
For a long time it's been possible, but discouraged, to run a separate Docker daemon inside a Docker container; similarly, it's possible, but usually discouraged, to run a multi-process init system in a container. If you do both of these things then you can have the core Kubernetes components (etcd, apiserver, kubelet, ...) inside a single container pretending to be a Kubernetes node. It also helps here that Kubernetes already knows how to pull Docker images, which minimizes some of the confusing issues with running Docker in Docker.
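For reference, a minimal sketch of trying this out (the --driver flag exists from minikube 1.9 onward):
minikube start --driver=docker
docker ps --filter name=minikube    # the whole "node" is a single container named minikube
kubectl get nodes                   # ...which Kubernetes nevertheless reports as a node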

Run Ansible over Docker vs VM

I want to create a test environment for an ansible-playbook run from my PC against a Linux server cluster, which installs ELK on it, and I'm considering whether to run it on a container or a VM.
Obviously using Docker should make the process easier and faster, but I think there is more depth to this topic beyond the general discussion of choosing Docker versus a VM, specifically around Ansible deployments and how they relate to storage, networking, and privilege management.
What are the considerations for running Ansible deployments against a Docker container versus a virtual machine?
I'd almost never target Ansible against a Docker container.
Ansible's model is much more suited to targeting a VM. If you have an existing Ansible playbook that's targeting a physical system or a cloud instance, a VM will be a good mirror of the operating system environment it expects, but a Docker setup will be very different.
Ansible generally expects to make an ssh connection to its target host, run a Python interpreter installed there, and have its changes be reasonably persistent. In contrast, a Docker container almost never runs an ssh daemon, frequently won't have Python, and any changes that get made will be lost as soon as the container exits. A typical server-oriented Ansible playbook will do things like set up service configuration and init scripts, but in a Docker system there is no init system, and service configuration is generally injected from outside.
It's probably better here to think of a Docker container as packaging around a single process. You can use bind mounts to inject configuration from the host, and you could use Ansible on the host to start the container, but you wouldn't use Ansible to "set up" a container. If you need software installed in a container then using Docker's native docker build system can get this done in a reproducible way, without needing additional steps after the container is started.
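As a minimal sketch of that division of labor (the image name and config paths here are hypothetical):
docker build -t my-elk-image .                                       # software is installed by the Dockerfile, not Ansible
docker run -d -v "$PWD/app.conf:/etc/my-app/app.conf" my-elk-image   # configuration is injected at startup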
The one prominent exception to the "almost never" is running Molecule tests inside a container, but note that this setup embraces changes being temporary and short-lived (as soon as the test is over, you want to tear down the container).

Unable to connect to running docker containers (minikube docker daemon)

When I run my docker container using Docker Desktop for Windows I am able to connect to it using
docker run -p 5051:5000 my_app
http://0.0.0.0:5051
However when I open another terminal and do this
minikube docker-env | Invoke-Expression
and then build and run the same container using the same run command as above, I cannot connect to the running instance.
Should I be running and testing the containers using Docker Desktop, then using minikube only to store the images (for Kubernetes)? Or can I run and test them through minikube as well?
That's because on your second attempt, the container is not running on the host but on the minikube VM. You'll be able to access it using the minikube VM IP.
To get that IP, run minikube ip.
Why?
Invoking minikube docker-env sets all the Docker environment variables in your shell to match the minikube environment. This means that when you run a container after that, it is run by the Docker daemon on the minikube VM.
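Putting it together, the flow on the asker's machine looks roughly like this (PowerShell, using the port numbers from the question):
minikube docker-env | Invoke-Expression    # point this shell's docker CLI at the minikube VM's daemon
docker run -d -p 5051:5000 my_app          # the container now runs inside the minikube VM
minikube ip                                # prints the VM's IP; browse to http://<that-ip>:5051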
I asked you whether there are any specific reasons to use Docker Desktop and Minikube together on a single machine, as these are two competing solutions which basically enable you to perform similar tasks and achieve the same goals.
This article nicely explains the differences between these two tools.
Docker-for-windows uses a Type-1 hypervisor, such as Hyper-V, which is generally considered better than Type-2 hypervisors, such as VirtualBox, while Minikube supports both kinds. Unfortunately, there are a couple of limitations depending on which technology you are using, since you cannot have Type-1 and Type-2 hypervisors running at the same time on your machine.
If you use Docker Desktop and Minikube at the same time, I assume you're using a Type-1 hypervisor, such as the mentioned Hyper-V, but keep in mind that even if they use the same hypervisor, both tools create their own instances of virtual machines. Basically you are not supposed to use those two tools together expecting that they will work as a kind of hybrid that lets you manage a single container environment.
First check which hypervisor you are using exactly. If you're using Hyper-V, a simple Get-VM command in PowerShell (more details in this article) should tell you what you currently have.
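For example (the VM names below are illustrative; they vary by setup):
Get-VM
# Name               State     ...
# DockerDesktopVM    Running   ...
# minikube           Running   ...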
@mario no, I didn't know minikube had a docker daemon until recently, which is why I have both
Yes, Minikube has a built-in docker environment (in fact it sets everything up, including the container runtime) so basically you don't need to install docker separately, and as @Marc ABOUCHACRA already suggested in his answer, Minikube runs the whole environment (a single-node k8s cluster with a docker runtime) on a separate VM. The Linux version has an option, --vm-driver=none, which allows you to use your host's container runtime and set up the k8s components on it, but this is not the case with the Windows version - here you can only use one of the two currently supported hypervisors: Hyper-V or VirtualBox (ref).
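For comparison, a sketch of that Linux-only option (it runs as root, since it installs the components directly on the host):
sudo minikube start --vm-driver=none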
I wouldn't say that Docker Desktop runs everything on your host. It also uses a Type-1 hypervisor to run the container runtime environment. Please run the Get-VM command on your computer and it should be clear which VMs you have and which tool created them.

How to use Molecule inside a VM to test an Ansible role that installs Docker

I have an Ansible role that among other things installs Docker and starts the docker daemon in a CentOS environment. I would like to use Molecule to test it, but as my workstation is a Windows PC I have to run Molecule from a VirtualBox VM. At least theoretically my options are:
Use Molecule's Vagrant driver and run a VM inside my VM
Use Molecule's Docker driver and have a docker container which starts the docker daemon.
As far as I can tell the first option is not really possible with VirtualBox; is there a way to achieve the second one? I searched around, but all the posts I found concerned running Molecule itself from within a container rather than the setup I described.
If I try to use a default Molecule scenario, systemctl fails to start the docker daemon.

Access docker within Dockerfile?

I would like to run integration tests while I'm building a docker image. Those tests need to instantiate docker containers.
Is there a way to access docker inside such a multi-stage docker build?
No, you can't do this.
You need access to your host's Docker socket somehow. In a standalone docker run command you'd do something like docker run -v /var/run/docker.sock:/var/run/docker.sock, but there's no way to pass that option (or any other volume mount) into docker build.
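For comparison, the run-time equivalent that does work (my-image is a hypothetical name; the socket path is Docker's standard one):
docker run -v /var/run/docker.sock:/var/run/docker.sock my-image
docker build -t my-image .    # no -v/--volume option exists for this command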
For running unit-type tests (that don't have external dependencies) I'd just run them in your development or core CI build environment, outside of Docker, and only run docker build once they pass. For integration-type tests (that do have external dependencies) you need to set up those dependencies, maybe with a Docker Compose file, which again will be easier to do outside of Docker. This also avoids needing to build your test code and its additional dependencies into your image.
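A sketch of that ordering in a CI script (the compose service and script names are hypothetical):
docker compose up -d database      # start the integration dependencies on the host
./run-integration-tests.sh         # run the tests against them, outside any image build
docker build -t my-app .           # only build the image once the tests pass
docker compose down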
(Technically there are two ways around this. The easier of the two is the massive security disaster that is opening up a TCP-based Docker socket; then your Dockerfile could connect to that ["remote"] Docker daemon and launch containers, stop them, kill itself off, impersonate the host for inbound SSH connections, launch a bitcoin miner that lives beyond the container build, etc.; actually it allows any process on the host to do any of these things. The much harder, as @RaynalGobel suggests in a comment, is to try to launch a separate Docker daemon inside the container; the DinD image link there points out that it requires a --privileged container, which again you can't have at build time.)
