Docker swarm service create - docker

I am new to Docker Swarm. I am wondering whether it is possible to use my own locally built image with the docker service create command.
For example, I have created an image called testing and I run the following command: docker service create [OPTIONS] testing.
Thank you, and sorry for my broken English.

Yes, it is possible. See the documentation for the docker service create command.
However, the image you want to use must be accessible from the Docker swarm. The standard approach is to push the image to the Docker Trusted Registry that should be running alongside the swarm, or to another registry reachable from the swarm. This only matters when you are working with a production deployment of Docker swarm with multiple nodes. A local swarm on your own machine can use the same images you would use with docker run.
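As a sketch, the workflow for making a locally built image available to a multi-node swarm might look like this (registry.example.com is a placeholder registry; substitute your own):

```shell
# Tag the locally built image with a registry-qualified name
# that every swarm node can resolve
docker tag testing registry.example.com/testing:1.0

# Push it so every node can pull it
docker push registry.example.com/testing:1.0

# Create the service from the registry-qualified name;
# --with-registry-auth forwards your registry credentials to the nodes
docker service create --name testing --with-registry-auth \
  registry.example.com/testing:1.0
```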

Related

Build Dockerfile without docker on Kubernetes (AKS 1.19.0) running with containerd

I have an Azure DevOps pipeline building a Dockerfile on AKS. As AKS is deprecating Docker with the latest release, kindly suggest a best practice for building a Dockerfile without Docker on an AKS cluster.
I am exploring Kaniko and Buildah to build without Docker.
Nothing has changed. You can still use docker build and docker push on your developer or CI system to build and push the Docker image to a repository. The only difference is that using Docker proper as the container runtime within your Kubernetes cluster is no longer a supported option, but this is a low-level administrator decision that your application doesn't know or care about.
Unless you were somehow building using the host Docker socket within your Kubernetes cluster, this change will not affect you. And if you were mounting the Docker socket from the host in a Kubernetes cluster, I'd consider that a security concern that you want to fix.
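Concretely, the developer-side workflow is unchanged by the containerd switch; a typical CI sequence still looks like this (myregistry.azurecr.io/myapp and the deployment name are placeholders):

```shell
# Runs on the CI agent or developer machine, not inside the cluster
docker build -t myregistry.azurecr.io/myapp:v1 .
docker push myregistry.azurecr.io/myapp:v1

# The cluster then pulls the image via containerd when the pod is scheduled
kubectl set image deployment/myapp myapp=myregistry.azurecr.io/myapp:v1
```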
Docker Desktop runs a Docker engine as a container on top of containerd, allowing developers to build and run containers in that environment. The same can be done with DinD (Docker-in-Docker) build patterns that run the Docker engine inside a container; the difference is that the underlying container management tooling is containerd instead of a full Docker engine, but the containerized Docker engine is indifferent to that.
As an alternative to building within the full Docker engine, I'd recommend looking at BuildKit, which is the default build tool in Docker as of 20.10. It uses containerd, and the project ships a selection of manifests to run builds directly in Kubernetes as a standalone builder.
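A minimal sketch of driving a build through BuildKit via docker buildx (builder and image names are illustrative):

```shell
# Create a buildx builder backed by BuildKit and make it the default
docker buildx create --name mybuilder --use

# Build and push in one step; the build runs inside the BuildKit builder,
# so the host Docker socket is not needed for the build itself
docker buildx build -t registry.example.com/myapp:v1 --push .
```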

Is it possible to provide secret to docker run?

I am just wondering whether it is possible to pass a Docker secret created from a file to docker run as an argument, or to mount a Docker secret during docker run.
I know it is possible with docker service, where we can specify --secret when creating the service, but I didn't see such an option for docker run.
The Docker secrets functionality is implemented only in swarm mode. You can make a single-node swarm cluster very easily (docker swarm init) and run your container as a service. For one-off containers, some people simply mount a file containing the secret as a single read-only host volume, e.g.:
docker run -v "$(pwd)/your_secret.txt:/run/secrets/your_secret.txt:ro" image_name
This has less security than a swarm mode secret, but the real value of swarm secrets is in multi-node clusters, where you don't want to deploy and manage a directory of sensitive data on worker nodes.
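For comparison, the single-node swarm approach mentioned above would look roughly like this (my_secret, your_secret.txt, and image_name are placeholders):

```shell
# Turn the local engine into a one-node swarm
docker swarm init

# Create a secret from a file
docker secret create my_secret ./your_secret.txt

# Run the workload as a service; the secret appears in the
# container at /run/secrets/my_secret
docker service create --name app --secret my_secret image_name
```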
As of the docker-compose v3.1 file format, it is also possible to use Docker secrets with docker-compose: https://docs.docker.com/compose/compose-file/compose-file-v3/#secrets

What are the correct steps to re-deploy a docker container on compute engine?

I deploy a docker container on Compute Engine.
I want to redeploy this container after building a new Docker image with the same image name and tag, e.g. webapp:latest.
For now, I redeploy the container by restarting the Compute Engine instance, which I don't think is correct.
What is the correct way to redeploy a docker container?
When you deploy Docker images on Google Compute Engine VM instances there are some limitations: you can only deploy one container per VM instance, and you can only use Container-Optimized OS images with this deployment method.
I believe the best workaround is to uncheck the container option in the instance details so that a container is not deployed through the Container-Optimized OS image. That option is only useful if you want to deploy a single container on the VM.
Instead, install Docker on the VM yourself and manage the container with it. Also consider Google Kubernetes Engine if you need to deploy multiple containers per VM instance.
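If you run Docker yourself on the VM, the redeploy becomes an ordinary pull-and-restart cycle; a sketch with illustrative container and port names:

```shell
# Fetch the freshly built image; with an unchanged tag like :latest,
# an explicit pull is required to get the new layers
docker pull webapp:latest

# Replace the running container with one based on the new image
docker stop webapp && docker rm webapp
docker run -d --name webapp -p 80:8080 webapp:latest
```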

difference between docker service and docker container

I can create a docker container with the command
docker run <image_name>
I can create a service with the command
docker service create <image_name>
What is the difference between these two in behaviour?
When would I need to create a service over container?
The docker service command in a Docker swarm replaces docker run. docker run was built for single-host solutions; its whole idea is to focus on local containers on the system it is talking to. In a cluster, the individual containers are irrelevant: we use swarm services to manage multiple containers, and swarm orchestrates the containers of those services for us.
docker service create is mainly to be used in Docker swarm mode. docker run has no concept of scaling up or down; with docker service create you can specify the number of replicas to be created using the --replicas flag. This creates and manages multiple replicas of a container across different nodes. There are several such options for managing multiple containers under docker service create and the other docker service ... commands.
One more note: Docker services are for container orchestration systems (swarm). They have a built-in facility for failure recovery, i.e. a failed container is recreated, whereas docker run would never recreate a container if it fails. When the docker service commands are used, we are not directly asking to perform an action like "create a single container"; rather, we are telling the orchestration system to "put this job in your queue and, when you can get to it, perform that action on the swarm". This means it has rollback facilities, failure mitigation, and a lot of intelligence built in.
You should consider docker service create when in swarm mode and docker run when not in swarm mode. You can read up on Docker swarm to understand docker services.
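The scaling difference can be sketched as follows (nginx used as a stand-in image):

```shell
# docker run starts exactly one container on the local host
docker run -d --name web nginx

# docker service create lets the swarm maintain a desired replica count
docker service create --name web --replicas 3 nginx

# Scale later; the swarm schedules the extra replicas across nodes
docker service scale web=5
```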
There is no real difference. The official documentation says "Services are really just containers in production".
Services can be declared in docker-compose.yml and started from it. Once started, they run as containers.
It is just a common way to name the parts of your stack.

docker-compose swarm without docker-machine

After looking through the official Docker swarm explanations, GitHub issues, and Stack Overflow answers, I'm still at a loss as to why I am having this problem.
Issue at hand: docker-compose up starts services outside the swarm even though the swarm is active and has 2 nodes.
I'm using Docker version 1.12.1.
Following the swarm tutorial, I was able to start and scale my swarm using docker service create without any issues.
Running docker-compose up with a version 2 docker-compose.yml results in services starting outside of the swarm; I can see them through docker ps but not docker service ls.
I have seen docker-machine presented as the tool that solves this problem, but then again it needs VirtualBox to be installed.
So my questions are:
1. Can I use docker-compose with docker swarm (NOT docker-engine) without docker-machine and without the experimental build bundle functionality?
2. If docker service create can start a service on any node, is that an indication that the network configuration of the swarm is correct?
3. What are the advantages/disadvantages of docker-machine versus the experimental build functionality?
1) No. Docker Compose isn't integrated with the new swarm mode yet. Issue 3656 on GitHub is tracking that. If you start containers on a swarm with Docker Compose at the moment, it uses docker run to start them, which is why you see them all on one node.
2) Yes. You can actually use docker node ls on the manager to confirm all the nodes are up and active, and docker node inspect to check a particular node; you don't need to create a service to validate the swarm.
3) Docker Machine is also behind the 1.12 release, so if you start a swarm with Docker Machine it will be the 'old' type of swarm. The old Docker Swarm product needed a whole lot of extra setup for a key-value store, TLS etc. which Swarm Mode does for free.
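The node checks from point 2 can be run on a manager like so (the node name is an example):

```shell
# List all nodes with their status and availability
docker node ls

# Inspect a particular node in human-readable form
docker node inspect --pretty worker-1
```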
1) You can't start services using docker-compose on the new Docker swarm mode. There's a feature to convert a docker-compose file to the new DAB format, which is understood by the new swarm mode, but it is incomplete and experimental at this point. You basically need to use bash scripts to start services for now.
2) The nodes in a swarm (swarm mode) interact using their own overlay network; it's the one named ingress when you run docker network ls. You need to set up your own overlay network to run services in, e.g.:
docker network create -d overlay mynet
docker service create --name serv1 --network mynet nginx
3) I'm not sure what feature you mean by "experimental build". docker-machine is just a way to create hosts (the nodes). It facilitates setting up the Docker daemon on each host and the certificates, and allows some basic maintenance (renewing the certs, stopping/starting a host if you're the one who created it). It doesn't create services, volumes, or networks, and doesn't manage them; that's the job of the Docker API.