combining cloudbees ec2 docker image with docker in docker - jenkins

I am trying to combine the docker-in-docker feature with the CloudBees ECS image.
The two images are built from different Linux distributions: the CloudBees ECS slave image is built from ubuntu:14.04, while docker:1.8-dind is built from debian:jessie. What is the best way to combine both into one Docker image, with both features, using debian:jessie as the base?

I've done something similar in the past and it usually comes down to walking the Dockerfile dependency chain and building up the image that way. In your instance, you'd probably want to start at https://hub.docker.com/r/cloudbees/java-build-tools/~/dockerfile/ and swap out
FROM ubuntu:15.04
with
FROM debian:jessie
And build it to see what works and what doesn't work. Typically it's a package manager or something that needs to be updated/replaced.
The downside of this approach is that it can be a lot of trial and error and you end up with giant Dockerfiles, but the upside is that you can usually streamline your image to do exactly what you want without a lot of the Ubuntu extras.
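As a rough sketch of where that ends up (the package list here is an assumption; copy the real steps from the two upstream Dockerfiles), the merged Dockerfile might begin:
FROM debian:jessie
# Steps adapted from the CloudBees java-build-tools image (package names are guesses)
RUN apt-get update && apt-get install -y openjdk-7-jdk git maven curl && rm -rf /var/lib/apt/lists/*
# ...followed by the steps from docker:1.8-dind, which already targets debian:jessie,
# i.e. installing the docker binary and its dind wrapper script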

Related

what is "docker container"?

I understand that the Docker engine sits on top of the Docker host (which is the OS), and that the Docker engine pulls Docker/container images from Docker Hub (or any other repo). The Docker engine interacts with the OS to configure and set up a container from the image pulled as part of the "docker run" command.
However, I quite often also come across the term "Docker container". Is this another tool, and what is its role in the entire architecture? I know there are Windows containers and Linux containers for the respective Docker hosts, but what is a "Docker container" itself? Is it something people use loosely to simply refer to containers in general?
In simple words, when you execute a docker image, it will spawn a docker container.
You can relate it to a Java class (the Docker image): when you instantiate the class, it creates an object (the Docker container).
So a Docker container is an executable form of a Docker image. You can have multiple Docker containers from a single Docker image.
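To illustrate (the image and container names are arbitrary):
docker pull nginx                  # the image (the "class")
docker run -d --name web1 nginx    # one container (an "object")
docker run -d --name web2 nginx    # a second container from the same image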
A Docker container is launched from an image: an executable package (think of it as a tarball, or archive) that can stand on its own. The image has everything it needs to run, such as software, runtimes, tools, libraries, etc. Check out Docker for more information.
Docker containers are nothing but processes which are spawned using an image as a source.
The processes are sandboxed(isolated) from other processes in terms of namespaces and controlled in terms of memory, cpu, etc. using control groups. Control groups and namespaces are Linux kernel features which help in creating a sandboxed environment to run processes in isolation.
"Container" is the name Docker uses for these sandboxed processes.
Some trivia: the concept of sandboxing processes is also present in FreeBSD, where it is called Jails.
While the concept isn't new in terms of the core technology, Docker was innovative in imagining the entire ecosystem in terms of containers and providing excellent tools on top of the kernel features.
First of all, you (generally) start with a Dockerfile, which is a script where you set up the Docker environment in which you are going to work (the OS, the extra packages, etc.). If you like, it is analogous to the source code in typical programming languages.
Dockerfiles are built (with the command sudo docker build pathToDockerfile/) and the result is an image. It is basically a built (or compiled, if you prefer) and executable version of the environment described in your Dockerfile.
Actually, you can also download Docker images directly from Docker Hub.
Continuing the simile, it is like the compiled executable.
Now you can run the image, assigning it a name or setting different attributes. This is a container. Think, for example, of a server environment where you might need the same service to be instantiated more than once at the same time.
Continuing the simile again, this is like having the same executable program launched many times at the same time.
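Putting the whole simile together (the names are made up for illustration):
sudo docker build -t myenv .           # Dockerfile -> image ("compile")
sudo docker run -d --name svc1 myenv   # image -> running container ("launch")
sudo docker run -d --name svc2 myenv   # the same image instantiated a second time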

How are Packer and Docker different? Which one should I prefer when provisioning images?

How are Packer and Docker different? Which one is easier/quickest to provision/maintain, and why? What are the pros and cons of having a Dockerfile?
Docker is a system for building, distributing and running OCI images as containers. Containers can be run on Linux and Windows.
Packer is an automated build system to manage the creation of images for containers and virtual machines. It outputs an image that you can then take and run on the platform you require.
For v1.8 this includes - Alicloud ECS, Amazon EC2, Azure, CloudStack, DigitalOcean, Docker, Google Cloud, Hetzner, Hyper-V, Libvirt, LXC, LXD, 1&1, OpenStack, Oracle OCI, Parallels, ProfitBricks, Proxmox, QEMU, Scaleway, Triton, Vagrant, VirtualBox, VMware, Vultr
Docker's Dockerfile
Docker uses a Dockerfile to manage builds; the Dockerfile format has a specific set of instructions and rules about how you build an image.
Images are built in layers. Each FROM, RUN, ADD, and COPY instruction modifies the layers included in an OCI image. These layers can be cached, which helps speed up builds. Each layer can also be addressed individually, which helps with disk usage and download usage when multiple images share layers.
Dockerfiles have a bit of a learning curve; it's best to look at some of the official Docker images for practices to follow.
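For instance, each instruction below produces one cacheable layer (a hedged illustration, not tied to any particular image):
FROM debian:jessie                              # base layer, shared with other jessie-based images
RUN apt-get update && apt-get install -y curl   # cached until this line changes
COPY app/ /opt/app/                             # only this layer rebuilds when your code changes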
Packer's Docker builder
Packer does not require a Dockerfile to build a container image. The docker plugin uses an HCL or JSON config file which starts the image build from a specified base image (like FROM).
Packer then allows you to run standard system config tools called "Provisioners" on top of that image. Tools like Ansible, Chef, Salt, shell scripts etc.
This image will then be exported as a single layer, so you lose the layer caching/addressing benefits compared to a Dockerfile build.
Packer allows some modifications to the build container environment, like running as --privileged or mounting a volume at build time, that Docker builds will not allow.
You might want to use Packer when you want to build images for multiple platforms with the same setup. It also makes it easy to reuse existing build scripts if there is a provisioner for them.
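A minimal sketch of such a config in HCL (the base image and commands are placeholders, not a definitive setup):
source "docker" "example" {
  image  = "debian:jessie"   # plays the role of FROM
  commit = true              # commit the result as an image
}
build {
  sources = ["source.docker.example"]
  provisioner "shell" {
    inline = ["apt-get update", "apt-get install -y curl"]
  }
}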
Expanding on "Which one is easier/quickest to provision/maintain and why? What are the pros and cons of having a Dockerfile?":
From personal experience learning and using both, I found: (YMMV)
docker configuration was easier to learn than packer
docker configuration was harder to coerce into doing what I wanted than packer
speed difference in creating the image was negligible, after development
docker was faster during development, because of the caching
the docker daemon consumed some system resources even when not using docker
there are a handful of processes running as the daemon
I did my development on Windows, though I was targeting Linux servers for running the images.
That isn't an issue during development, except for a foible of running Docker on Windows.
The docker daemon reserves various TCP port ranges for itself
The ranges might change every time you reboot your system or restart the daemon
The only error message is to the effect of "can't use that port!", without saying why it can't.
BTW, the workaround is to:
turn off the hypervisor
reboot
reserve the public ports you want your host system to see (see the netsh example after this list)
turn the hypervisor back on
reboot
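The reservation step can be done with netsh from an elevated prompt; the example below (the range is a placeholder) reserves TCP 8000-8099 so Hyper-V can't grab them after the reboot:
netsh interface ipv4 add excludedportrange protocol=tcp startport=8000 numberofports=100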
Running Packer on Windows, however, the issue I found is that the provisioner I wanted to use, Ansible, doesn't run on Windows.
Sigh.
So I ended up having to run Packer on a Linux system after all.
Just because I was feeling perverse, I wrote a Dockerfile so I could run both packer and ansible from my Windows station in a docker container using that image.
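Such a Dockerfile might look like this (a hedged sketch; the base image, Packer version, and package names are assumptions):
FROM ubuntu:18.04
# Ansible from the distro repos; Packer from the HashiCorp release zips
RUN apt-get update && apt-get install -y ansible curl unzip \
 && curl -fsSL https://releases.hashicorp.com/packer/1.8.0/packer_1.8.0_linux_amd64.zip -o /tmp/packer.zip \
 && unzip /tmp/packer.zip -d /usr/local/bin && rm /tmp/packer.zip
ENTRYPOINT ["packer"]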
Docker builds images using a Dockerfile.
These can be run (Docker containers).
Packer also builds images. But you don't need a Dockerfile. And you get the option of using provisioners such as Ansible, which let you create vastly more customisable images. Packer isn't used for running these images.

Docker Base Image - which flavour?

We're running a Java based Webshop on SLES12.
Currently we're deciding if we want to run the Webshop as a Docker container in the future.
At least in our test environment we will host the Webshop as a Docker container.
My question is now: how important is it to choose a base image for the Docker container that is as close to production as possible? Which means: is it necessary (or recommendable) to build the Docker container on a SLES (or openSUSE) base image, or is it OK to keep Debian as the base image?
What's the major difference between Debian and SUSE base images (apart from the packaging tool, directory structure, and base image size)?
How important is it to choose a base image for the Docker container that is as close to production as possible?
It is not just important, it is mandatory. Your Docker image has to be as close as possible to your prod environment if you intend to use Docker for development but not in prod.
Which means: is it necessary (or recommendable) to build the Docker container on a SLES (or openSUSE) base image, or is it OK to keep Debian as the base image?
If you are not running your project via Docker in prod, you need to stay as close as possible: if prod runs on Debian, use Debian as the base image. If you intend to run prod with Docker, it is better to keep the image as light as possible (all the more so if you intend to make it public). But keeping a Debian base is OK.
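For example, if prod is SLES12, a closer-to-prod Dockerfile would start from an openSUSE base rather than Debian (the tag and package name are illustrative):
FROM opensuse/leap:15
RUN zypper --non-interactive install java-1_8_0-openjdk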

How to use Docker to obtain JDK + MySQL + Tomcat container?

My target is to obtain a "JDK + MySQL + Tomcat" Docker image.
Am I supposed to achieve this by building an image like this:
docker build -t myImage .
with the following Dockerfile?
FROM mysql
FROM tomcat
FROM openjdk
And then run this image like this:
docker run -d -p 80:80 myImage
Should I use YAML instead? What is the best practice for running the Tomcat and MySQL servers: via a YAML file or by hand in the console?
First of all, you need to split your application into microservices; that's Docker's philosophy, and it has many advantages, including:
Improved fault isolation
Eliminates long-term commitment to a single technology stack
Makes it easier for a new developer to understand the functionality of a service
Easier upgrade management
In your case I recommend separating JDK, MySQL, and Tomcat into their own containers. There are already official library images available on Docker Hub. You can use user-defined networks to connect your three containers together, as sketched below.
Using the FROM instruction multiple times will not combine images for you; the final FROM takes over. Source.
Finally, if you insist on combining them in a single image, you should choose a base image like Debian and use RUN instructions to install the required packages and build the image.
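A hedged sketch of the recommended multi-container setup (the network name, password, and ports are placeholders; the JDK already ships inside the Tomcat image):
docker network create shop-net
docker run -d --name db --network shop-net -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name app --network shop-net -p 80:8080 tomcat
# the app container can now reach MySQL at the hostname "db"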

Building images on docker

Can images in Docker be built from source code? What I mean is: I want to build my environment with several components and their dependencies, building the components from their source code. Does Docker allow me to do something like that?
Sounds like you want a dynamic Docker build process. For this you need Docker 1.9 or later; use --build-arg to pass argument variables. You can build multiple images from a single Dockerfile, passing in different argument values each time.
Obviously this suffers from the reproducibility issue discussed below.
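A minimal sketch of that pattern (the repo URL and argument name are made up):
FROM debian:jessie
RUN apt-get update && apt-get install -y git
ARG SRC_BRANCH=master
RUN git clone -b $SRC_BRANCH https://example.com/project.git /src
Then different images can be built from the same Dockerfile:
docker build --build-arg SRC_BRANCH=dev -t myimage:dev .
docker build --build-arg SRC_BRANCH=master -t myimage:master .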
Yes, it allows you to do that. You need to start with a base image. For example Ubuntu:
docker pull ubuntu
docker run -t -i ubuntu /bin/bash
After that you will have a bash shell running inside your container. Then you can apt-get stuff, run code, change configurations, clone repos, and whatever else you want. After that, to convert your container into an image, you need to commit the container.
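For example (the container ID and image name are placeholders):
docker commit 3a09b2588478 myimage:manual    # snapshot the container as an image
docker run -t -i myimage:manual /bin/bash    # run the new image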
Be aware that this is not the Docker way of building infrastructure. The correct way is to create a recipe for building your images by using other base images and standard Docker instructions. This will allow your infrastructure to be stateless, faster to build, and more reproducible.
