Quick and easy way to expose local docker images like a registry?

I am looking for a solution to speed up development and testing with docker images.
Is there a solution that would expose all tagged images from a host through a docker registry API?
This would allow me to:

docker build
docker tag
NOT populate a registry

and still have the docker image immediately available and usable, in Kubernetes for example (sketched below).
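For illustration (my-app is a placeholder image name), the workflow I have in mind:

docker build -t my-app:dev .
docker tag my-app:dev my-app:test
# no docker push; Kubernetes could already pull my-app:dev straight from the host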
I am surprised this doesn't already exist.
How do people iterate on docker images (fixing config, checking everything deploys correctly, etc.)?
I used to do this locally with docker-compose, but using Kubernetes makes a registry mandatory and seems to complicate things a bit more.

Related

Dockerfile FROM command - Does it always download from Docker Hub?

I just started working with docker this week and came across a 'dockerfile'. I was reading up on what this file does, and the official documentation basically mentions that the FROM keyword is needed to specify a "base image", and that these base images are pulled (i.e. downloaded) from Docker Hub.
Silly question - are base images always pulled from Docker Hub?
If so, and if I understand correctly, I am assuming that running the dockerfile to create an image is not done very often (only when the image needs to be created), and once the image is created, the image is what's run all the time?
So the dockerfile can then be migrated to whichever environment, and things can be set up all over again quickly?
Pardon the silly questions; I am just trying to understand the overall flow and how the dockerfile fits into things.
If the local Docker daemon (on your host) already has a copy of the container image specified by FROM in a Dockerfile (i.e. it's been docker pull'd), then it's cached and won't be re-pulled.
Container images include a tag (be wary of ever using latest); the image name, e.g. foo, combined with the tag (which defaults to latest if not specified) is the full name that's checked. I.e. if you have foo:v0.0.1 locally and the Dockerfile says FROM foo:v0.0.1, then the local copy is used, but FROM foo:v0.0.2 will pull foo:v0.0.2.
There's an implicit docker.io prefix, i.e. docker.io/foo:v0.0.1, that references the Docker registry being used.
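To make the naming rules concrete (foo is the example image name from above; each line is an independent illustration, not one Dockerfile):

# No tag, so this means foo:latest
FROM foo
# Explicit tag
FROM foo:v0.0.1
# Same image as the previous line, with the registry prefix made explicit
FROM docker.io/foo:v0.0.1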
You could repeatedly docker build container images on the machines where the container is run, but this is inefficient; the more common mechanism is that, once a container image is built, it is pushed to a registry (e.g. DockerHub) and then pulled from there by whatever machines need it.
There are many container registries: DockerHub, Google Artifact Registry, Quay etc.
There are tools other than docker that can be used to interact with containers e.g. (Red Hat's) Podman.

How do I docker buildx build into a local "registry" container

I am trying to build a multi-arch image but would like to avoid pushing it to docker hub. I've had a lot of trouble finding out how to control the export options. Is there a way to make "--push" push to a registry of my choosing?
Any help is appreciated
Docker provides a container image for a registry server that you can run yourself, even on localhost; see: Deploying a registry server.
There are other servers|services that implement the registry API (see below) but this is a good place to start.
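As a sketch, starting that registry locally is a single docker run (registry:2 is the image name from Docker's guide; the container name is arbitrary):

docker run -d -p 5000:5000 --name registry registry:2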
Conventionally, images pushed|pulled default to Docker's registry; unless a registry is explicitly specified, an image, e.g. your-image:your-tag, defaults to docker.io/your-image:your-tag. In my opinion, it's good practice to always include this default prefix explicitly, to be more transparent about it.
If you run Docker's registry image on localhost on the default port 5000, you'll need to tag your images as localhost:5000/your-image:your-tag to ensure that, when you docker push localhost:5000/your-image:your-tag, the CLI can determine that your local registry is the intended destination.
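For example:

docker tag your-image:your-tag localhost:5000/your-image:your-tag
docker push localhost:5000/your-image:your-tag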
Similarly, if you use e.g. the Quay registry, images must be prefixed quay.io; with Google Artifact Registry, images are prefixed ${REGION}-docker.pkg.dev/${PROJECT}/${REPOSITORY}; etc.
IIRC it's not possible to push to Docker's registry (aka dockerhub) without an account so, as long as you ensure you're not logged in, you should not accidentally push images to Docker's registry.
NOTE You only need a registry to ease distribution of container images between machines. If you're only interested in local(host) development, you can docker run ... immediately after a successful docker build without any pushing|pulling (beyond the base images pulled for FROM).
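For example (my-app is a placeholder image name):

docker build -t my-app:dev .
docker run --rm my-app:dev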

How can I use Helm to deploy a docker image from my file system?

How can I use Helm to deploy a docker image from my local file system?
What's the syntax in the chart for that?
I'm not looking to pull from the local docker daemon/repository, but from the file system proper.
file:///mumble/whatever as produced by docker build -o type=local,dest=mumble/whatever
The combined Helm/Kubernetes/Docker stack doesn't really work that way. You must build your image locally and push it to some sort of registry before you can run it (or be using a purely local environment like minikube). It's unusual to have "an image in the filesystem" at all.
Helm knows nothing about building images; for that matter, Helm on its own doesn't really know much about images at all. It knows how to apply the Go templating language to text files to (hopefully) produce YAML, and to send those to the Kubernetes cluster, but Helm has pretty limited awareness of what those mean. In a Helm chart you'd typically write something like
image: {{ .Values.registry }}/{{ .Values.image }}:{{ .Values.tag }}
but that doesn't "mean" anything to Helm.
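For instance, a hypothetical values.yaml feeding that template might contain:

registry: docker.io
image: my-app
tag: v1.0.0

Helm would render the template to image: docker.io/my-app:v1.0.0 without knowing or caring whether that image actually exists.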
Kubernetes does understand image: with its normal meaning, but again that generally refers to an image that lives in Docker Hub or a similar image registry, not something "on a filesystem". The important thing for Kubernetes is that it expects to have many nodes, so if an image isn't already on a node, it knows how to pull it from the registry. A very typical setup in fact is for nodes to be automatically created and destroyed (via the cluster autoscaler) and you'd almost never have anything of importance on the node outside of a pod.
This means, in practical use, you basically must have a container registry. You can use Docker Hub, or run one locally, or use something provided by your cloud provider. If you're using a desktop-oriented Kubernetes setup, both minikube and kind have instructions for running a registry as part of that setup.
Basically the one exception to this is that, in minikube, you can use minikube's Docker daemon directly by running
eval $(minikube docker-env)
docker build -t my-image .
This builds the image normally, keeping it "inside Docker space", except here it's inside the minikube VM.
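One practical note: for a pod to use an image built this way, its spec generally needs to tell Kubernetes not to pull from a registry, e.g. (app is a placeholder container name; my-image is the tag built above):

spec:
  containers:
  - name: app
    image: my-image
    imagePullPolicy: Never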
A Docker image in a file isn't especially useful. About the only thing you can do with it is docker load it; even plain Docker without Kubernetes can't directly run it. If you happened to have the file, you could docker load it into minikube the same way we built the image above, or use minikube image load. If it's at all an option, though, using the standard Docker registry system will generally be easier than trying to replicate it with plain files.
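As a sketch of those commands (my-image:dev is a placeholder tag):

docker save my-image:dev -o my-image.tar    # write the image to a tar file
minikube image load my-image.tar            # make it available inside minikube
docker load -i my-image.tar                 # or load it into any plain Docker daemon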

Moving my Docker application which is a collection of containers

I have been reading various articles about migrating my Docker application to a different machine. All the articles talk about “docker commit” or “export/import”. This only covers a single container, which is first converted to an image, and then we do a “docker run” on the new machine.
But my application is usually made up of several containers, because I am following the best practice of segregating different services.
The question is, how do I migrate or move all the containers that have been configured to join together and run as one. I don’t know whether “swarm” is the correct term for this.
The alternative I see is - simply copy the “docker-compose” and “dockerfile” into the new machine and do a fresh setup of the architecture. Then I copy all the application files. It runs fine.
My proposal is, of course, not the only solution, but it's quite nice:
1. Create your docker images on one machine (where you have your Dockerfile).
2. Upload the images to a docker registry (you can use your own Docker Hub account, or maybe a Nexus, or whatever).
2.1. It's also recommended to tag your images with versions, and to protect against overwriting an image of the same version with different code (see the sketch after this list).
3. Use docker-compose to deploy (docker-compose up is like several docker run commands, but easier to maintain); it's recommended to define a docker network for all the containers that have to interact with each other.
4. You can deploy on several machines just by using the same docker-compose.yml, plus access to your registry.
4.1. Deployment can be done on a single host, a swarm, Kubernetes... (for Kubernetes you'd have to translate your docker-compose.yml into Kubernetes YAML files).
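A sketch of steps 1 and 2 (registry.example.com/my-app is a placeholder image name):

docker build -t registry.example.com/my-app:1.0.0 .
docker push registry.example.com/my-app:1.0.0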
I agree with the docker-compose suggestion, and with storing your images in a registry or on your local machine. Each section in your docker-compose file is separated per service, and each service is written in YAML format.
You are going to want version 3 YAML, I believe. From there you write something like the example below, where each service uses an image from your registry or built locally from your folder.
version: '3'
services:
  drupal:
    image:
    ......ports, volumes, etc
  postgres:
    image:
    ......ports, volumes, etc
Disclosure: I took a Docker Course from Bret Fisher on Udemy.

How to get transferable docker compose stack without dockerhub

I have a few docker images composed together in a stack using docker-compose.yml.
Now I want to transfer the whole docker compose stack to another host machine without uploading to Docker Hub,
and deploy it on the docker swarm.
I saw there is a thing called docker compose bundle; would that help?
If you’re deploying on a multi-host swarm (or something similar like Kubernetes or Nomad) you all but need a Docker registry. It doesn’t specifically have to be Docker Hub — quay.io, Amazon’s ECR, Google’s GCR, and self-hosted registries all work fine — but you do need to have pushed the built images somewhere where the orchestrator can retrieve them by name.
I’ve never used docker-compose bundle myself, but its documentation also notes that its operation “requires interaction with a Docker registry”.
The only real alternative is using docker save and docker load to manually move images between machines, but as a manual process it will get tedious very quickly, and you need to make sure an identical set of images is on every machine for consistency. Using a registry will be vastly easier.
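A sketch of that manual process (the image names and host are placeholders):

docker save -o stack-images.tar my-app:1.0 my-db:1.0
scp stack-images.tar user@other-host:
ssh user@other-host docker load -i stack-images.tar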
The easiest way to do it is to use a Docker registry. The problem with Docker Hub is that you can only have one private repository for free; the rest must be public or paid.
Thankfully, there are other (free) alternatives:
Deploy your own private registry. Here is a nice tutorial where you can try it in the browser.
Use a free private registry. I personally use Codefresh. It can automatically build your image from a private repo (like Bitbucket, which has a free plan too), but you can also just use it as a "simple" docker registry and push and pull your Docker images there.
