Which Docker images will run on Kubernetes?

How can I find out if a given Docker image can be run using Kubernetes?
What should I do to help ensure that my images will run well in any Kubernetes-managed environment?

All Docker images can be run on Kubernetes -- it uses Docker to run the images.
You can expose ports from containers just like when using Docker directly, pass in environment variables, mount storage volumes from the host into the container, and more.
If you have anything particular in mind, I'd be interested in hearing about any image you find that can't be run using Kubernetes.
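For example, a minimal sketch of running a public image on a cluster with a published port and an environment variable (the nginx image and all names here are illustrative, assuming kubectl is configured against a cluster):
# run a pod from a public Docker image, exposing a port and setting an env var
kubectl run web --image=nginx:1.25 --port=80 --env="MY_VAR=hello"
# make the pod reachable from outside the cluster
kubectl expose pod web --port=80 --type=NodePort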

It depends on the processor architecture of the machine. If the image is compatible with the underlying hardware architecture, the K8s master node should be able to deploy the container. I had this problem when I tried to deploy a Docker container on a Raspberry Pi 3 (an ARM machine) with a Docker image that was built for x86-64.
To try this in practice, deploy a container from the following image on an x86-64 machine:
docker pull arifch2009/hello
The following error will be shown:
standard_init_linux.go:178: exec user process caused "exec format error"
This is a simple application that prints "Hello World". However, the program inside the image is compiled for the ARM architecture, so the binary cannot be executed on anything other than an ARM machine.
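A quick way to check an image's target platform before deploying it is to inspect its metadata; a sketch using the image above (the image must be pulled locally first):
docker pull arifch2009/hello
docker image inspect --format '{{.Os}}/{{.Architecture}}' arifch2009/hello
# prints e.g. linux/arm, which an x86-64 node cannot execute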

Related

What is the workflow of building a Docker image?

I know how to use "docker build" to build an image from a Dockerfile, and that it packages the build context as a tar and sends it to the Docker daemon.
What does the Docker daemon do when building the image? Does it create a temporary container?
A Docker image is roughly equivalent to a "snapshot" in other virtual machine environments. It is a record of a Docker virtual machine, or Docker container, at a point in time. Think of a Docker image as a digital picture. A Docker container can be seen as a printout of that picture. Docker images have the special characteristic of being immutable. They can't be modified, but they can be duplicated and shared or deleted. The immutability is useful when testing new software or configurations because no matter what happens, the image will still be there, as usable as ever.
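To tie this back to the build workflow: with the classic builder, the daemon receives the build context as a tar archive, then runs each Dockerfile instruction in a temporary intermediate container and commits the result as a new immutable layer. A minimal sketch (alpine:3.19 is only an example base):
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN echo hello > /greeting.txt
EOF
docker build -t demo:v1 .
# Editing the Dockerfile and rebuilding yields a new image;
# demo:v1 stays untouched, illustrating image immutability.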

When using a Linux Docker image there are no issues, but the Windows Docker image fails

I get the following error when using the Windows golang Docker image:
Job failed: Error response from daemon: manifest for golang:latest-windowsservercore-1803 not found
The line from my .gitlab-ci.yml file:
image: golang:latest-windowsservercore
However, when I use the default golang image, which I think is based on Linux, it works fine with no errors.
The below works:
image: golang:latest
I need the build phase to build a Windows executable, hence the change. I have tried lots of different permutations taken from
https://hub.docker.com/_/golang
but nothing works. Is there something I am doing wrong?
This image is based on Windows Server Core (microsoft/windowsservercore). As such, it only works in places which that image does, such as Windows 10 Professional/Enterprise (Anniversary Edition) or Windows Server 2016.
(from the golang page on Docker Hub)
So if you are using GitLab, there are also limitations on which container, executor, and OS combinations are supported.
The Docker executor
GitLab Runner can use Docker to run jobs on user-provided images. This is possible with the use of the Docker executor.
The Docker executor, when used with GitLab CI, connects to the Docker Engine and runs each build in a separate and isolated container, using the predefined image that is set up in .gitlab-ci.yml and in accordance with config.toml.
A table listing the supported combinations of containers, executors, and OS can be found in the Docker executor documentation.
You can also check the Windows container limitations here.
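Before pointing .gitlab-ci.yml at a Windows image, it can also help to verify that the exact tag exists in the registry. A sketch using docker manifest inspect (the tag below is an example to check against Docker Hub; older Docker CLIs need experimental features enabled for this command):
docker manifest inspect golang:windowsservercore >/dev/null && echo "tag exists"
# a missing tag fails with an error like "manifest ... not found", matching the CI error above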

Docker in Docker: building Docker agents in a Dockerized Jenkins server

I am currently running Jenkins in Docker. When trying to build Docker apps, I am unsure whether I should use Docker in Docker (DinD) by binding the /var/run/docker.sock file, or instead install another instance of Docker inside my Jenkins container. I have seen that using anything other than docker.sock was previously discouraged.
I don't actually understand why we should use anything other than the host's Docker daemon, apart from not polluting it.
Source: https://itnext.io/docker-in-docker-521958d34efd
The best solution for the "Jenkins in a Docker container needs Docker" case is to add your host as a node (agent) in Jenkins. This will make every build step (literally everything) run on your host machine. It took me a month to find the perfect setup.
Mount the Docker socket in the Jenkins container: you will lose the build context. The files you want to COPY into the image are located in the workspace inside the Jenkins container while Docker is running on the host, so COPY fails for sure.
Install the Docker client in the Jenkins container: you have to alter the official Jenkins image, which adds complexity, and you will lose the context too.
Add your host as a Jenkins node: perfect. You keep the context, with no altering of the official image.
Without completely understanding why you would need to use Docker in Docker (I suspect you have special requirements for the environment in which you build the actual image), may I suggest multi-stage builds of Docker images? You might find them useful, as they enable you to first build the build environment and then build the actual image (hence the name "multi-stage build"). Check it out here: https://docs.docker.com/develop/develop-images/multistage-build/
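A minimal multi-stage Dockerfile sketch (the Go toolchain and all names here are only illustrative): the first stage carries the full build environment, while the final image copies out just the compiled artifact:
cat > Dockerfile <<'EOF'
# build stage: full toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# static binary so it runs on the minimal base below
RUN CGO_ENABLED=0 go build -o /out/app .

# final stage: only the artifact
FROM alpine:3.19
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["app"]
EOF
docker build -t myapp .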

How are Packer and Docker different? Which one should I prefer when provisioning images?

How are Packer and Docker different? Which one is easier/quicker to provision/maintain, and why? What are the pros and cons of having a Dockerfile?
Docker is a system for building, distributing and running OCI images as containers. Containers can be run on Linux and Windows.
Packer is an automated build system to manage the creation of images for containers and virtual machines. It outputs an image that you can then take and run on the platform you require.
For v1.8 this includes - Alicloud ECS, Amazon EC2, Azure, CloudStack, DigitalOcean, Docker, Google Cloud, Hetzner, Hyper-V, Libvirt, LXC, LXD, 1&1, OpenStack, Oracle OCI, Parallels, ProfitBricks, Proxmox, QEMU, Scaleway, Triton, Vagrant, VirtualBox, VMware, Vultr
Docker's Dockerfile
Docker uses a Dockerfile to manage builds which has a specific set of instructions and rules about how you build a container.
Images are built in layers. Each FROM, RUN, ADD, and COPY instruction modifies the layers included in an OCI image. These layers can be cached, which helps speed up builds. Each layer can also be addressed individually, which helps with disk usage and download usage when multiple images share layers.
Dockerfiles have a bit of a learning curve; it's best to look at some of the official Docker images for practices to follow.
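For instance, the layers of an existing image, together with the instruction that created each one, can be listed with (golang:latest is just an example):
docker history golang:latest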
Packer's Docker builder
Packer does not require a Dockerfile to build a container image. The docker plugin uses an HCL or JSON config file, which starts the image build from a specified base image (similar to FROM).
Packer then allows you to run standard system config tools called "Provisioners" on top of that image. Tools like Ansible, Chef, Salt, shell scripts etc.
This image will then be exported as a single layer, so you lose the layer caching/addressing benefits compared to a Dockerfile build.
Packer allows some modifications to the build container environment, like running as --privileged or mounting a volume at build time, that Docker builds will not allow.
Times you might want to use Packer are when you want to build images for multiple platforms with the same setup. It also makes it easy to reuse existing build scripts if there is a provisioner for them.
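For comparison with a Dockerfile, a minimal sketch of a Packer HCL config using the docker builder and a shell provisioner (all names and versions here are illustrative):
cat > example.pkr.hcl <<'EOF'
packer {
  required_plugins {
    docker = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/docker"
    }
  }
}

# start from a base image, like FROM in a Dockerfile
source "docker" "alpine" {
  image  = "alpine:3.19"
  commit = true
}

build {
  sources = ["source.docker.alpine"]

  # any provisioner (shell, ansible, ...) runs on top of the base image
  provisioner "shell" {
    inline = ["apk add --no-cache curl"]
  }

  # tag the committed single-layer image
  post-processor "docker-tag" {
    repository = "myorg/packer-demo"
    tags       = ["v1"]
  }
}
EOF
packer init example.pkr.hcl && packer build example.pkr.hcl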
Expanding on "Which one is easier/quickest to provision/maintain and why? What are the pros and cons of having a Dockerfile?":
From personal experience learning and using both, I found (YMMV):
docker configuration was easier to learn than packer
docker configuration was harder to coerce into doing what I wanted than packer
speed difference in creating the image was negligible, after development
docker was faster during development, because of the caching
the docker daemon consumed some system resources even when not using docker
there are a handful of processes running as the daemon
I did my development on Windows, though I was targeting LINUX servers for running the images.
That isn't an issue during development, except for a foible of running Docker on Windows.
The docker daemon reserves various TCP port ranges for itself
The ranges might change every time you reboot your system or restart the daemon
The only error message is to the effect of "can't use that port!", with no indication of why it can't.
BTW, the workaround is to:
turn off the hypervisor
reboot
reserve the public ports you want your host system to see (see the netsh sketch below)
turn on the hypervisor
reboot
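For the port-reservation step above, a sketch of the relevant Windows commands (run from an elevated PowerShell prompt; 8080 is an example port):
# list the TCP port ranges currently reserved by the system
netsh int ipv4 show excludedportrange protocol=tcp
# reserve a specific port for your own services before the hypervisor grabs it
netsh int ipv4 add excludedportrange protocol=tcp startport=8080 numberofports=1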
Running packer on Windows, however, I found that the provisioner I wanted to use, Ansible, doesn't run on Windows.
Sigh.
So I ended up having to run packer on a LINUX system after all.
Just because I was feeling perverse, I wrote a Dockerfile so I could run both packer and ansible from my Windows station in a docker container using that image.
Docker builds images using a Dockerfile.
These can be run as Docker containers.
Packer also builds images, but you don't need a Dockerfile, and you get the option of using provisioners such as Ansible, which let you create vastly more customisable images. Packer isn't used for running these images.

Docker: How to avoid "Operation not permitted" in a Docker container?

I created a Docker image of a SLES12 machine by taking a backup of all the necessary file systems and creating one tar file. To create the Docker image I ran the following command:
cat fullbackup.tar | docker import - sles_image
After that I ran the image in a container using the command below:
docker run --net network1 -i -t sles_image /bin/bash
Note: I have already set up networking in this Docker container (with the IP address I want).
Now, in my Docker container some applications are already configured, because those applications were available on the SLES12 machine from which I created this image. These custom applications internally run some low-level kernel commands like modprobe.
But when I start my application, it does not start correctly; I'm facing this error:
Operation not permitted
How can I give the correct permissions so that it will not give me this error?
You might try running the Docker container with runtime privileges and Linux capabilities, with:
docker run --privileged
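If full privileged mode is more than you need, granting a single capability may suffice; since modprobe loads kernel modules, CAP_SYS_MODULE is the likely candidate (a sketch reusing the run command from the question; adjust the capability for your application):
# --privileged grants all capabilities; --cap-add grants only the named one
docker run --net network1 --cap-add SYS_MODULE -i -t sles_image /bin/bash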
If you are on a Mac, resolve the issue by giving Docker permission to access the files and folders, or, as another workaround, manually copy the files into the container instead of mounting them.
