Can Kubernetes ever create a Docker image?

I'm new to Kubernetes and I'm learning about it.
Are there any circumstances where Kubernetes is used to create a Docker image instead of pulling it from a repository?

Kubernetes does not natively create images, but you can run a tool such as Kaniko in the Kubernetes cluster to achieve it. Kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster.
The kaniko executor image is responsible for building an image from a Dockerfile and pushing it to a registry. Within the executor image, we extract the filesystem of the base image (the FROM image in the Dockerfile). We then execute the commands in the Dockerfile, snapshotting the filesystem in userspace after each one. After each command, we append a layer of changed files to the base image (if there are any) and update the image metadata.
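As a minimal sketch, running the kaniko executor as a Kubernetes Pod looks roughly like this; the Git context URL, destination image, and the regcred Secret name are placeholders you would replace with your own:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=Dockerfile
    - --context=git://github.com/example/app.git    # placeholder source repo
    - --destination=registry.example.com/app:v1     # placeholder target image
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker                    # kaniko reads push credentials here
  volumes:
  - name: docker-config
    secret:
      secretName: regcred                           # docker-registry Secret (placeholder name)
      items:
      - key: .dockerconfigjson
        path: config.json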

Several options exist to create Docker images inside Kubernetes.
If you are already familiar with Docker and want a mature project, you could run Docker CE inside Kubernetes. Check here: https://hub.docker.com/_/docker and look for the dind tag (docker-in-docker). Keep in mind there are pros and cons to this approach, so take care to understand them.
Kaniko seems to have potential, but at the time of writing there was no 1.0 release yet.
I've been using docker dind (docker-in-docker) to build Docker images that run in a production Kubernetes cluster, with good results.
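A rough sketch of the dind approach as a Pod, assuming illustrative image tags and names (note the privileged flag, which is one of the main cons of dind):

apiVersion: v1
kind: Pod
metadata:
  name: dind-build
spec:
  restartPolicy: Never
  containers:
  - name: dind
    image: docker:24-dind            # illustrative tag of the official dind image
    securityContext:
      privileged: true               # dind requires a privileged container
    env:
    - name: DOCKER_TLS_CERTDIR
      value: ""                      # disable TLS; acceptable for this local-only sketch
  - name: build
    image: docker:24-cli             # illustrative tag with just the docker CLI
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375    # the daemon sidecar in the same Pod
    command: ["sh", "-c", "sleep 5 && docker version"]   # replace with your docker build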

Related

Dockerfile FROM command - Does it always download from Docker Hub?

I just started working with Docker this week and came across a Dockerfile. I was reading up on what this file does, and the official documentation basically says that the FROM keyword is needed to set the base image, which is then pulled from Docker Hub.
Silly question - are base images always pulled from Docker Hub?
If so, and if I understand correctly, I'm assuming that running the Dockerfile to create an image is not done very often (only when you need to create an image), and once the image is created, the image is what's run all the time?
So the Dockerfile can then be migrated to whichever environment, and things can be set up all over again quickly?
Pardon the silly question - I'm just trying to understand the overall flow and how the Dockerfile fits into things.
If the local Docker daemon (on your host) already has a copy of the container image specified by FROM in a Dockerfile (i.e. it's been docker pull'd before), then it's cached and won't be re-pulled.
Container images are identified by a name (e.g. foo) combined with a tag (which defaults to latest if not specified; be wary of ever using latest). That full name is what's checked: if you have foo:v0.0.1 locally and the Dockerfile says FROM foo:v0.0.1, the local copy is used, but FROM foo:v0.0.2 will pull foo:v0.0.2.
There's also an implicit docker.io prefix (i.e. foo:v0.0.1 really means docker.io/foo:v0.0.1) that identifies the registry being used.
You could repeatedly docker build container images on each machine where the container is run, but this is inefficient. The more common mechanism is that, once a container image is built, it is pushed to a registry (e.g. Docker Hub) and then pulled from there by whatever machines need it.
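That round trip looks something like this (the registry and image names here are placeholders):

docker build -t registry.example.com/myapp:v0.0.1 .   # build locally from a Dockerfile
docker push registry.example.com/myapp:v0.0.1         # publish the image to the registry
# ...then, on any machine that needs the image:
docker pull registry.example.com/myapp:v0.0.1
docker run registry.example.com/myapp:v0.0.1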
There are many container registries: DockerHub, Google Artifact Registry, Quay etc.
There are tools other than docker that can be used to interact with containers e.g. (Red Hat's) Podman.

How can I use Helm to deploy a docker image from my file system?

How can I use Helm to deploy a docker image from my local file system?
What's the syntax in the chart for that?
I'm not looking to pull from the local docker daemon/repository, but from the file system proper.
file:///mumble/whatever as produced by docker build -o type=local,dest=mumble/whatever
The combined Helm/Kubernetes/Docker stack doesn't really work that way. You must build your image locally and push it to some sort of registry before you can run it (or be using a purely local environment like minikube). It's unusual to have "an image in the filesystem" at all.
Helm knows nothing about building images; for that matter, Helm on its own doesn't really know much about images at all. It knows how to apply the Go templating language to text files to (hopefully) produce YAML, and to send those to the Kubernetes cluster, but Helm has pretty limited awareness of what those mean. In a Helm chart you'd typically write something like
image: {{ .Values.registry }}/{{ .Values.image }}:{{ .Values.tag }}
but that doesn't "mean" anything to Helm.
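For instance, with values like these (made up for illustration), Helm just does string substitution; the rendered image: line only acquires meaning once Kubernetes reads it:

# values.yaml (illustrative)
registry: registry.example.com
image: myapp
tag: v1.2.3

# rendered output in the Pod spec
image: registry.example.com/myapp:v1.2.3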
Kubernetes does understand image: with its normal meaning, but again that generally refers to an image that lives in Docker Hub or a similar image registry, not something "on a filesystem". The important thing for Kubernetes is that it expects to have many nodes, so if an image isn't already on a node, it knows how to pull it from the registry. A very typical setup in fact is for nodes to be automatically created and destroyed (via the cluster autoscaler) and you'd almost never have anything of importance on the node outside of a pod.
This means, in practical use, you basically must have a container registry. You can use Docker Hub, or run one locally, or use something provided by your cloud provider. If you're using a desktop-oriented Kubernetes setup, both minikube and kind have instructions for running a registry as part of that setup.
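As a sketch, running a throwaway local registry with the standard registry:2 image looks like this (the port and names are up to you):

docker run -d -p 5000:5000 --name registry registry:2   # start a local registry
docker tag my-image localhost:5000/my-image             # retag the image for that registry
docker push localhost:5000/my-image                     # now pullable as localhost:5000/my-image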
The one exception to this is that, in minikube, you can use minikube's Docker daemon directly by running
eval $(minikube docker-env)
docker build -t my-image .
This builds the image normally, keeping it "inside Docker space", except here it's inside the minikube VM.
A Docker image in a file isn't especially useful. About the only thing you can do with it is docker load it; even plain Docker without Kubernetes can't directly run it. If you happened to have the file, you could docker load it into minikube's Docker daemon the same way we built the image above, or use minikube image load. If it's at all an option, though, using the standard Docker registry system will generally be easier than trying to replicate it with plain files.
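For completeness, the file round trip is roughly this (image and file names are illustrative; minikube image load exists on reasonably recent minikube versions):

docker save -o my-image.tar my-image   # write the image to a tar file
docker load -i my-image.tar            # read it back into a Docker daemon
minikube image load my-image.tar       # or load the tar straight into minikube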

What is the workflow of a docker image building?

I know how to use docker build to build an image from a Dockerfile, and that it packages the build context as a tar and sends it to the Docker daemon.
What does the Docker daemon do when building the image? Does it create a temporary container?
A Docker image is roughly equivalent to a "snapshot" in other virtual machine environments. It is a record of a Docker container at a point in time. Think of a Docker image as a digital picture, and a Docker container as a printout of that picture. Docker images have the special characteristic of being immutable: they can't be modified, but they can be duplicated, shared, or deleted. The immutability is useful when testing new software or configurations, because no matter what happens, the image will still be there, as usable as ever. And yes: with the classic builder, docker build executes each Dockerfile instruction in a temporary intermediate container, commits the result as a new image layer, and then removes that container.
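To make that concrete, each instruction in a Dockerfile like this one (contents made up for illustration) becomes one step of the build, and each filesystem-changing step is committed as a new immutable layer:

FROM alpine:3.19              # base image layers are pulled, not rebuilt
RUN apk add --no-cache curl   # runs in a temporary container, committed as a layer
COPY app.sh /app.sh           # adds a layer containing just the copied file
CMD ["/app.sh"]               # metadata-only change; no filesystem layer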

Nexus 3: create a Docker image with pre-defined configuration

I want to create a Nexus 3 Docker image with a pre-defined configuration (a few repos and dummy artifacts) for testing my library.
I can't call the Nexus API from the Dockerfile, because it requires a running Nexus.
I tried starting the Nexus 3 container, configuring it manually, and creating an image from the container with
docker commit ...
The new image is created, but when I start a new container from it, it doesn't contain all the manual configuration I did before.
How can I customize the Nexus 3 image?
If I understand correctly, you are trying to create a portable, standalone, customized Nexus 3 installation in a self-contained Docker image for testing/distribution purposes.
Doing this by extending the official nexus3 docker image will not work. Have a look at their Dockerfile: it defines a volume for /nexus_data and there is currently no way of removing this from a child image.
It means that when you start a container without any specific options, a new volume is created for each new container. This is why your committed image starts with blank data. The best you can do is to name the data volume when you start the container (option -v nexus_data:/nexus_data for docker run) so that the same volume is reused. But the data will still live in your local Docker installation, not in the image.
To do what you wish, you need to build your own image without a data volume. You can start from the official Dockerfile above and just remove the VOLUME line. Then you can customize your container and commit it to an image which will contain the data, as sketched below.
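A rough outline of that workflow (the image names are made up; the Dockerfile is the official one minus its VOLUME line, and the data path should match whatever that Dockerfile uses):

docker build -t nexus3-novolume .                   # official Dockerfile, VOLUME line removed
docker run -d --name nexus-custom -p 8081:8081 nexus3-novolume
# ...configure repos and upload dummy artifacts via the UI or API...
docker commit nexus-custom my-nexus3-test:latest    # data is now baked into the image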

Launch different containers from a Dockerfile

Is there any way to launch containers from different images simultaneously using a single Dockerfile?
There is a misconception here. A Dockerfile is not responsible for launching a container; it's responsible for building an image (which you can then use docker run ... to create a container from). More info can be found in the official Docker documentation.
If you need to run many Docker containers simultaneously, I'd suggest you have a look at Docker Compose, which you can use to run containers based on images either from a registry or custom-built via Dockerfiles.
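For example, a minimal docker-compose.yml along these lines (service and image names are made up; web builds from a local Dockerfile, cache pulls from Docker Hub):

services:
  web:
    build: .         # built from the Dockerfile in this directory
    ports:
      - "8080:8080"
  cache:
    image: redis:7   # pulled from Docker Hub

Then a single docker compose up builds what's needed and starts both containers together.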
Also somewhat new to Docker, but my understanding is that the Dockerfile is used to create Docker images, and then you start containers from images.
If you want to run multiple containers you need to use an orchestrator like Docker Swarm or Kubernetes.
Those have their own configuration files that tell it which images to spin up.
