Docker in Docker and AWS CLI for Windows Containers

I'm trying to migrate a legacy .NET application to AWS ECS/Fargate. I'm following this article that explains how to create a custom Windows Docker image with the MSBuild tools used in an AWS CodePipeline/CodeBuild project. I also need to be able to install a Docker daemon and AWS CLI v2 into that custom image so that I can execute docker and AWS CLI commands from the buildspec.yaml file in CodeBuild. So far I've been able to use this code in my custom image's Dockerfile, which installs Docker in Docker, but the Docker service never gets started, even though the image understands the docker --version command. I was also trying to modify this PowerShell script to install the AWS CLI, but I'm stuck there too, with little to no progress.
I'd appreciate any help in installing Docker in Docker and AWS CLI.
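For reference, the kind of install step I've been trying for the AWS CLI part looks roughly like this (just a sketch, assuming PowerShell as the Dockerfile shell and the standard AWS-hosted MSI at https://awscli.amazonaws.com/AWSCLIV2.msi; untested in this image):

SHELL ["powershell", "-Command"]
# Download the AWS CLI v2 MSI, install it silently, then clean up
RUN Invoke-WebRequest -Uri https://awscli.amazonaws.com/AWSCLIV2.msi -OutFile C:\AWSCLIV2.msi; Start-Process msiexec.exe -ArgumentList '/i C:\AWSCLIV2.msi /qn' -Wait; Remove-Item C:\AWSCLIV2.msi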

When I had to use Docker in Docker, I instead used the host's Docker socket by mounting it into the container.
On Linux I had to mount two files:
/usr/bin/docker (the executable)
/var/run/docker.sock (the daemon socket)
Update: the above works for Linux. For Windows a double slash is required, and the named pipe below would have to be mounted instead. I couldn't test this personally, as I don't have Windows.
"\\.\pipe\docker_engine:\\.\pipe\docker_engine"
I found a very good guide that explains this.
Ref: https://tomgregory.com/running-docker-in-docker-on-windows/

Related

VSCode in-container-debugging over SSH machine

I am trying to set up an advanced configuration with VSCode Insiders and I am facing an issue.
My setup is:
VSCode running on my local Windows 10 machine, with the Django source code hosted on that machine. I have no Docker client on this machine, and I don't want to install one...
A virtual machine with Ubuntu running a Docker daemon, a Docker client, and docker-compose. My workspace is shared over vboxsf and mounted on the Ubuntu machine.
A Python Docker container running on the Ubuntu machine, executing the mounted code.
I tried to use the Remote extension to debug the Python code inside the container. However, when I run VSCode against the remote SSH target (the Ubuntu machine), I am able to manage Docker objects (images, containers, etc.) using the Docker extension of VSCode, but I can't see the option Remote-Containers: Open Folder in Container. It's not found in the F1 command palette... I can see the other related commands, like Remote-Containers: Settings.
Do you have any idea? Or is my setup not supported by the extension? It seems like the extension supports SSH development or container development, but not mixing both together, right?
Is there any other VSCode configuration that would let me debug in my targeted setup?
Regards

Windows-based Couchbase image for Docker

Is a Windows-based image for Couchbase available to install with Docker? Or is there any way around it, so that Couchbase can be installed with Docker in a Windows container?
Images are always based on Linux. An image is based on another image, recursively, until you reach a base image like Ubuntu, Debian, or whatever. In any case, they are supposed to be unrelated to the host OS: they can be run with Docker on a Windows host, a Linux host, or an OS X host in the same way. On Windows or OS X you can install Docker to run containers based on Linux images; there is no problem with that.
Depending on the use of the container, if it needs some hardware to be useful (like wireless cards or something like that), then the host is important, because drivers and the kernel are directly involved. But usually, any image can be used to run containers independently of the Docker host.
As of today (2017-10-02), I don't think there is an official Couchbase Docker image for Windows container. Their Dockerfile shows that their images are built off of Ubuntu.
You can try setting everything up manually by following the steps below. (Note that installing via Chocolatey is just a convenience. You can choose another method.)
Get the Windows Server Core image.
host> docker pull microsoft/windowsservercore
Start the container in interactive mode.
host> docker run -it --name couchbase-on-windows microsoft/windowsservercore
Switch to PowerShell.
container-cmd> PowerShell
Install Chocolatey.
container-ps> Set-ExecutionPolicy Bypass; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
Install Couchbase. (The version on Chocolatey might be a bit behind the latest.)
container-ps> choco install couchbase-server-community
If all this works to your liking, then you can create a Dockerfile to make your own Docker image. You can see what this person did to create an image for Redis on Windows. Here's his Dockerfile.
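As a rough sketch, a Dockerfile combining the steps above might look like this (untested; the Chocolatey package name comes from the steps above, and 8091 is Couchbase's default web-console port):

FROM microsoft/windowsservercore
SHELL ["powershell", "-Command"]
# Install Chocolatey first, then Couchbase Server Community from its repository
RUN Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
RUN choco install couchbase-server-community -y
# Expose the Couchbase web console
EXPOSE 8091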

Docker in docker and docker compose block one port for no reason

Right now I am setting up an application that has a deployment based upon docker images.
I use GitLab CI to:
Test each service
Build each service
Dockerize each service (create a Docker image)
Run integration tests (start docker-compose, which brings up all services on special ports, then run the integration tests)
Stop the prod containers and run the new images
I did this for each service, but I ran into an issue.
When I start my Docker container for the integration tests, it is set up within a GitLab CI task. For each task a Docker-based runner is used. I also mount my host's Docker socket to be able to use Docker in Docker.
So my Gradle Docker image is started by the GitLab runner. Then Docker is installed and all images are started using docker-compose.
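For context, the socket mount is configured in the runner's config.toml, roughly like this (a sketch; the image name is just a placeholder for my Gradle build image):

[[runners]]
  executor = "docker"
  [runners.docker]
    image = "gradle:latest"
    # expose the host's Docker daemon inside each job container
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]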
One microservice listens on port 10004. Within the docker-compose file there is an 11004:10004 port mapping.
My integration tests try to connect to port 11004, but this does not work right now.
When I attach to the container that runs docker-compose while it tries to execute the integration tests, I am not able to connect manually either, by calling
wget ip:port
I just get a message that it connected and is waiting for a response. Neither can my tests connect successfully. My service does not log any message about a new connection.
When I execute this wget command from my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just one port of one service is broken when I try to connect from my Docker-in-Docker instance.
When I do not use docker-compose, it works. docker-compose seems to set up a special default network that does something weird.
Setting the network mode to host also works...
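To illustrate, the relevant part of my compose file looks roughly like this (the service name is made up here):

version: "2"
services:
  my-service:
    ports:
      - "11004:10004"
    # workaround: host networking makes the port reachable again
    # (note that the ports mapping above is ignored in this mode)
    # network_mode: host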
So has anyone else had a similar experience when using docker-compose?
The same setup works flawlessly in Docker for Mac, but my server runs on Debian 8.
My solution for now is to use a shell runner to avoid docker in docker issues. It works there as well.
So docker in docker combined with docker compose seems to have an ugly bug.
I'm writing this while sitting on the subway, but I hope my description of the issue is sufficient to talk about experiences. I don't think source code is needed to find a bad configuration here, because it works without Docker in Docker and it works on Mac.
I figured out that Docker in Docker still has some weird behaviors. I fixed my issue by adding a new GitLab CI runner that is a shell runner, so docker-compose runs on my host and everything works flawlessly.
I can reuse the same runner for starting Docker images in production as I do for integration testing, so the easy fix has another benefit for me.
The result is a best practice to avoid this pitfall:
Only use Docker in Docker when there is a real need,
for example, to ensure fast I/O communication between your host Docker image and the Docker image of interest.
Have fun using docker (in docker (in docker)) :]

Docker-Compose with Docker 1.12 "Swarm Mode"

Does anyone know how (if possible) to run docker-compose commands against a swarm using the new docker 1.12 'swarm mode' swarm?
I know with the previous 'Docker Swarm' you could run docker-compose commands directly against the swarm by updating DOCKER_HOST to point to the swarm master:
export DOCKER_HOST="tcp://123.123.123.123:3375"
and then simply execute commands as if you were running them against a single instance of Docker engine.
OR is this functionality something that docker-compose bundle is replacing?
I realized my question was vaguely worded and actually has two parts to it. Eventually however, I was able to figure out solutions to both issues.
1) Can you run commands directly 'against' a swarm / swarm-mode in Docker 1.12 running on a remote machine?
While you can't really run commands 'against' a swarm, you CAN run docker service commands on the master node of a swarm in order to run services on that swarm.
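For example, run on the manager node (the image and options here are illustrative only):

$ docker service create --name my-web --replicas 2 -p 80:80 nginx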
You can also configure the Docker daemon (the docker daemon that is the master node of the swarm) to listen on TCP ports in order to externally expose the Docker API.
2) Can you still use docker-compose files to start services in Docker 1.12 swarm-mode?
Yes, although these features are currently part of Docker's "experimental" features. This means you must download/install the version that includes the experimental features (check the GitHub repo).
You essentially follow these instructions https://github.com/docker/docker/blob/master/experimental/docker-stacks-and-bundles.md
to go from the docker-compose.yml file to a distributed application bundle and then to an application stack (this is when your services are actually run).
$ docker-compose bundle
$ docker deploy [OPTIONS] STACK
Here's what I did:
On my remote swarm manager node I started docker with the following options:
docker daemon -D -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 &
This configures the Docker daemon to listen on the standard Docker socket unix:///var/run/docker.sock AND on TCP port 2375 (binding 0.0.0.0 exposes it on all interfaces, not just localhost).
WARNING: I'm not enabling TLS here, just for simplicity.
On my local machine I update the docker host environment variable to point at my swarm master node.
$ export DOCKER_HOST="tcp://XX.XX.XX.XX:2375" (populate with your IP; this matches the 2375 port the daemon was started with)
Navigate to the directory of my docker-compose.yml file
Create a bundle file from my docker-compose.yml file. Make sure to include the .dab extension.
docker-compose bundle --fetch-digests -o myNewBundleFile.dab
Create an application stack from the bundle file. Do not specify the .dab extension here.
$ docker deploy myNewBundleFile
Now I'm still experiencing some networking-related issues, but I have successfully gotten my service up and running from my unmodified docker-compose.yml files. The network issues I'm experiencing are documented here: https://github.com/docker/docker/issues/23901
While the official support for Swarm mode in Docker Compose is still in progress, I've created a simple script that takes docker-compose.yml file and runs docker service commands for you. See https://github.com/ddrozdov/docker-compose-swarm-mode for details.
It is not possible. Compose uses containers to create a client-side concept of a service. Docker 1.12 Swarm mode introduces a new server-side concept of a service.
You are correct that docker-compose bundle; docker stack deploy is the way to get a Compose file running in Swarm Mode.

Using commands in the Bluemix user interface

I need to use a Docker container in Bluemix, but my laptop does not support Docker, so I can't use the commands to run Docker in Bluemix using the CLI plug-ins.
Is there any other way to do this?
Why can't you run it on your laptop? Docker can run in some flavor on most operating systems (albeit within a VM on some).
You have a number of options though:
Run it inside a Linux virtual machine locally
Run it inside a cloud Linux virtual machine
Run it inside a cloud container - yes, you can actually run Docker inside a Docker container (see the sketch after this list)
Install a Linux OS as a dual-boot option on your laptop and run Docker there.
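For the Docker-inside-Docker option, a minimal sketch using the official docker:dind image (this assumes a host that allows privileged containers):

$ docker run --privileged -d --name dind docker:dind
$ docker exec dind docker version   # talks to the inner daemon once it is up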
Which OS does your notebook run?
Docker supports Linux, OS X, and Windows as well, and you could choose to use the cf containers plug-in (cf ic), docker, or the ice client.
You can find the Bluemix documentation related to containers here.
