Does docker-compose configuration cover 100% of the docker CLI?

Trying to figure out the difference between docker and docker-compose, it looks like the docker-compose CLI effectively provides a means of running the docker CLI indirectly via configuration (What is the difference between docker and docker-compose).
Is there anything that you can do with the docker CLI that COULDN'T be specified in docker-compose.yml?

The docker CLI offers more functionality (e.g. docker history to inspect an image's history, to name just one) than docker-compose.yml. But the latter is meant for a very different purpose, namely making the deployment of multi-container applications easier.
So, to my knowledge, if we just look at starting and configuring containers, you can do everything with docker-compose that "plain" docker can do, but in a much more comfortable way.
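For illustration (a rough sketch; the nginx image, port, and paths are placeholders, not taken from the question), a docker run invocation such as:
docker run -d --name web -p 8080:80 -v "$PWD/html:/usr/share/nginx/html" nginx:alpine
maps fairly directly onto a docker-compose.yml like:
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"          # same host:container port mapping as -p
    volumes:
      - ./html:/usr/share/nginx/html   # same bind mount as -v
which is then started with docker-compose up -d.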

Related

Use docker image in another docker image

I have two docker images:
CLI tool
Webserver
The CLI tool is built from a very heavy Dockerfile that takes hours to build. I am trying to call the CLI tool from the webserver, but I'm not sure how to proceed. Is there a way to make the command created in 1 available in 2?
At this point I tried working with volumes, but no luck. Thanks!
The design of Docker sort-of assumes that containers communicate through a network, not through the command line. So the cleanest solution is to create a simple microservice that wraps the CLI tool and can be called through HTTP.
As a quick and dirty hack, you could also use sshd as such a microservice without writing any code.
An alternative that doesn't involve the network is to make the socket of the Docker daemon available in the webserver container using a bind mount:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Then you should be able to communicate with the host daemon from within the container, provided that you have installed the docker command line tool in the image. However, note that this makes your application strongly dependent on Docker, which might not be ideal. Also note that it essentially gives the container root access to the host system!
(Note that this is different from Docker-in-Docker, which is running a second Docker daemon inside a container and is generally not recommended except for specialized use cases.)
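As a rough sketch of what that looks like from inside the webserver container (the image name cli-tool and its subcommand are hypothetical, and the docker CLI must be installed in the webserver image):
# runs on the HOST daemon, as a sibling of the webserver container
docker run --rm cli-tool:latest some-subcommand --input /data/in --output /data/out
Keep in mind that any -v paths you pass here refer to the host filesystem, not the webserver container's filesystem, because the container is created by the host daemon.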

Build Dockerfile without docker on Kubernetes (AKS 1.19.0) running with containerd

I have an Azure DevOps pipeline building a Dockerfile on AKS. As AKS is deprecating Docker with the latest release, kindly suggest a best practice for building a Dockerfile without Docker on an AKS cluster.
I am exploring Kaniko and Buildah to build without Docker.
Nothing has changed. You can still use docker build and docker push on your developer or CI system to build and push the Docker image to a repository. The only difference is that using Docker proper as the container backend within your Kubernetes cluster isn't a supported option any more, but this is a low-level administrator-level decision that your application doesn't know or care about.
Unless you were somehow building using the host docker socket within your Kubernetes cluster, this change will not affect you. And if you were mounting the docker socket from the host in a kubernetes cluster, I'd consider that a security concern that you want to fix.
Docker Desktop runs a docker engine as a container on top of containerd, allowing developers to build and run containers in that environment. Something similar can be done with DinD build patterns that run the docker engine inside a container; the difference is that the underlying container management tooling is containerd instead of a full docker engine, but the containerized docker engine is indifferent to that.
As an alternative to building within the full docker engine, I'd recommend looking at BuildKit, which is the default build tool in docker as of 20.10. It can run on containerd, and they ship a selection of manifests to run builds directly in Kubernetes as a standalone builder.
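As a rough sketch (registry, image name and tag are placeholders), a standalone BuildKit build driven by its buildctl client looks roughly like this:
# build the Dockerfile in the current directory and push the result to a registry
buildctl build \
  --frontend dockerfile.v0 \
  --local context=. \
  --local dockerfile=. \
  --output type=image,name=registry.example.com/myapp:latest,push=true
Kaniko and Buildah follow the same idea: the build runs inside an ordinary pod and pushes straight to a registry, with no Docker daemon involved.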

docker-compose.yml vs docker-stack.yml what difference?

I am a new Docker user. In various manuals I usually find a docker-compose.yml file describing the Docker setup, but the Docker site uses a docker-stack.yml file for this purpose. What is the difference?
docker-compose.yml is for the docker-compose tool, which is for multi-container Docker applications on a single Docker engine.
It's called with:
docker-compose up
docker-stack.yml is for Docker Swarm (orchestration and scheduling).
It's called with:
docker stack
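Side by side (the stack name mystack is a placeholder):
docker-compose -f docker-compose.yml up -d         # single engine
docker stack deploy -c docker-stack.yml mystack    # swarm cluster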
To add to Gabbax0r's reply:
Docker Swarm was a standalone component used to cluster Docker engines as a single one.
As of Docker 1.12 the "Swarm" standalone was integrated inside the Docker engine (read the preamble at this page), and Swarm is (or will be) legacy.
To reply to your original question: they are just different names for different use cases, but both are meant to serve the same purpose.
To reply to your comment question: use docker-compose when you have to orchestrate a multi-container app on a single node; if you have to worry about multiple nodes, load balancing and all that advanced stuff, you are better off going with Swarm.
docker-stack.yml has the following advantages over docker-compose.yml:
Update separately
When working with services, swarms, and docker-stack.yml files, keep in mind that the tasks (containers) backing a service can be deployed on any node in a swarm.
This may be applied to a different node each time the service is updated.
Deploy remotely
If you are running Docker Swarm on your private host, then docker-stack.yml can be used to deploy your application remotely to that host using an SSH key.
You can even use a service like Codefresh to do so.
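A minimal sketch of such a remote deployment, assuming a reasonably recent Docker client and SSH access as user deploy to the placeholder host swarm.example.com:
# deploy the stack to the remote swarm manager over SSH
DOCKER_HOST=ssh://deploy@swarm.example.com docker stack deploy -c docker-stack.yml myapp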

Docker Container management in production environment

Maybe I missed something in the Docker documentation, but I'm curious and can't find an answer:
What mechanism is used to restart docker containers if they should error/close/etc?
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which complete an application without using docker compose? (as they say it is not production ready)
What mechanism is used to restart docker containers if they should error/close/etc?
Docker restart policies, as set with the --restart option to docker run. From the docker-run(1) man page:
--restart=""
    Restart policy to apply when a container exits (no, on-failure[:max-retry], always)
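For example (my-image is a placeholder):
docker run -d --restart=always my-image          # restart whenever the container exits
docker run -d --restart=on-failure:5 my-image    # retry up to 5 times on a non-zero exit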
Also, if many functions have to be done via a docker run command, say for instance volume mounting or linking, how does one bring up an entire hive of containers which complete an application without using docker compose?
Well, you can of course use docker-compose if that is the best match for your requirements, even if it is not labelled as "production ready".
You can investigate larger container management solutions like Kubernetes or even OpenStack (although I would not recommend the latter unless you are already familiar with OpenStack).
You could craft individual systemd unit files for each container.
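A minimal sketch of such a unit file (the image and container names are placeholders):
# /etc/systemd/system/myapp.service
[Unit]
Description=My application container
Requires=docker.service
After=docker.service

[Service]
# remove any stale container, then run a fresh one in the foreground
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp -v /srv/myapp:/data my-image
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
Enable it with systemctl enable --now myapp; systemd then handles restarts instead of Docker's own restart policy.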

Is it ok to run docker from inside docker?

I'm running Jenkins inside a Docker container. I wonder if it's ok for the Jenkins container to also be a Docker host? What I'm thinking about is to start a new docker container for each integration test build from inside Jenkins (to start databases, message brokers etc). The containers should thus be shutdown after the integration tests are completed. Is there a reason to avoid running docker containers from inside another docker container in this way?
Running Docker inside Docker (a.k.a. dind), while possible, should be avoided, if at all possible. (Source provided below.) Instead, you want to set up a way for your main container to produce and communicate with sibling containers.
Jérôme Petazzoni — the author of the feature that made it possible for Docker to run inside a Docker container — actually wrote a blog post saying not to do it. The use case he describes matches the OP's exact use case of a CI Docker container that needs to run jobs inside other Docker containers.
Petazzoni lists two reasons why dind is troublesome:
It does not cooperate well with Linux Security Modules (LSM).
It creates a mismatch in file systems that creates problems for the containers created inside parent containers.
From that blog post, he describes the following alternative,
[The] simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting "child" containers, it will start "sibling" containers.
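For the Jenkins case in the question, that might look roughly like this (image names and the test database are placeholders, and the Jenkins image must have the docker CLI installed, which the stock image does not):
# start Jenkins with the host's Docker socket mounted
docker run -d --name jenkins \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

# then, inside an integration-test build step:
docker run -d --name test-db -e POSTGRES_PASSWORD=secret postgres:15
# ... run the tests against test-db ...
docker rm -f test-db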
I answered a similar question before on how to run a Docker container inside Docker.
To run docker inside docker is definitely possible. The main thing is that you run the outer container with extra privileges (starting with --privileged=true) and then install docker in that container.
Check this blog post for more info: Docker-in-Docker.
One potential use case for this is described in this entry. The blog describes how to build docker containers within a Jenkins docker container.
However, Docker inside Docker is not the recommended approach for this type of problem. Instead, the recommended approach is to create "sibling" containers, as described in this post.
So, running Docker inside Docker was considered by many to be a good solution for this type of problem. Now the trend is to use "sibling" containers instead. See the answer by #predmijat on this page for more info.
It's OK to run Docker-in-Docker (DinD) and in fact Docker (the company) has an official DinD image for this.
The caveat however is that it requires a privileged container, which depending on your security needs may not be a viable alternative.
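A minimal sketch with the official image (network and container names are placeholders; TLS is disabled here purely to keep the example short):
docker network create dind-net
# the inner daemon needs --privileged
docker run -d --privileged --name dind --network dind-net --network-alias docker \
  -e DOCKER_TLS_CERTDIR="" docker:dind
# a client container talking to the inner daemon
docker run --rm --network dind-net -e DOCKER_HOST=tcp://docker:2375 docker:latest docker info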
The alternative solution of running Docker using sibling containers (aka Docker-out-of-Docker or DooD) does not require a privileged container, but has a few drawbacks that stem from the fact that you are launching the container from a context different from the one in which it runs (i.e., you launch the container from within a container, yet it runs at the host's level, not inside the parent container).
I wrote a blog describing the pros/cons of DinD vs DooD here.
Having said this, Nestybox (a startup I just founded) is working on a solution that runs true Docker-in-Docker securely (without using privileged containers). You can check it out at www.nestybox.com.
Yes, we can run Docker in Docker. We'll need to attach the Unix socket /var/run/docker.sock, on which the Docker daemon listens by default, as a volume to the parent container using -v /var/run/docker.sock:/var/run/docker.sock.
Sometimes permission issues may arise for the Docker daemon socket, which you can work around with sudo chmod 757 /var/run/docker.sock.
It may also require running the container in privileged mode, so the commands would be:
sudo chmod 757 /var/run/docker.sock
docker run --privileged=true -v /var/run/docker.sock:/var/run/docker.sock -it ...
I was trying my best to run containers within containers, just like you, for the past few days and wasted many hours. So far most people advise me to do things like using Docker's DinD image, which is not applicable in my case, as I need the main container to run Ubuntu, or to run some privileged command and map the daemon socket into the container (which never worked for me).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It's also extremely simple to use, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for this kind of application. It also has the most flexible options. The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it without bothering with all the tedious setup other people suggest. They have many pre-built solutions to address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created Docker containers; they also provide some Ubuntu/common OS images with Docker and systemd built in.
Their goal is to make the main container function exactly like a virtual machine, securely. You can literally ssh into your Ubuntu main container without being able to access anything on the main machine. From your main container you can create all kinds of containers, just like a normal local system does. That built-in systemd is very important for setting up Docker conveniently inside the container.
One simple command to run a container with the Sysbox runtime:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more on their GitHub:
https://github.com/nestybox/sysbox
Quick link to instructions on how to deploy a simple Sysbox runtime environment container: https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md
