How to deploy to a Docker Swarm from a local dev machine?

I've set up a Docker Swarm consisting of two VMs on my local network (1 manager, 1 worker). On the manager node I created a private registry service, and I want to deploy a number of images built on my local dev machine (which is not in the swarm) to that registry. The Swarm docs and the dozens of examples I've read on the Internet don't seem to go beyond the basics (running commands inside the manager node; building, tagging and pushing images from the manager's local cache to the registry on that same node), and I have that uneasy feeling that I'm missing something right in front of me.
I see that my machine could simply join the swarm as a manager, owning the registry. The other nodes would automagically receive the updates and my problem would go away. But does this make sense in a production swarm setting: a cluster of nodes serving production code depending on my dev home machine, even as a non-worker, manager-only node?
Things I've tried:
Retagging my local image to <my.node.manager.ip>/my_app:1.0.0, followed by docker-compose push. I can see this does push the image to the manager's registry, but the service fails to start with the message "No such image: <my.node.manager.ip>/my_app:1.0.0"
Creating a context and, from my machine, running docker-compose --context my_context up --no-start. This (re)creates the image in the manager node's local cache, which I can then push to the registry, but it feels very unwieldy as a deploy process.
Should I run a remote script in the manager node to git pull my code and then do the build/push/docker stack deploy?
TL;DR What are the expected steps to deploy an image/app to a Docker Swarm from a local dev machine outside the swarm? Is this possible? Is it supported by Docker Swarm?

After reading a bit more on private registries and tags, I could finally wrap my head around the necessary tagging for my use case to work. My first approach was halfway right, but I had to change my deploy script (a sketch of the result follows this list) so as to:
Extract the image field from my docker-compose.yml (in the form localhost:5000/my_app:${MY_APP_VERSION-latest}, to circumvent the "No such image" error)
Create a second tag for pushing to the remote registry, replacing "localhost" with my manager node's address (where the registry lives)
Tag my locally built image with that tag and docker-compose push it
Deploy the app with docker --context <staging|production> stack deploy my_app
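A minimal sketch of the resulting deploy script, assuming the registry is published on port 5000 of the manager node and that a remote context named production already exists (the manager address and version below are placeholders):

MANAGER=192.0.2.10            # placeholder for the manager node's address
export MY_APP_VERSION=1.0.0

# build locally under the localhost-prefixed tag from docker-compose.yml
docker-compose build

# second tag pointing at the manager's registry, then push it there
docker tag localhost:5000/my_app:$MY_APP_VERSION $MANAGER:5000/my_app:$MY_APP_VERSION
docker push $MANAGER:5000/my_app:$MY_APP_VERSION

# deploy through the remote context; the swarm nodes resolve localhost:5000 via the registry service's published port
docker --context production stack deploy -c docker-compose.yml my_app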
I'm answering myself since I did solve my original problem, but would love to see other DevOps implementations for similar scenarios.

Related

How to push a Docker Compose stack to a Docker Swarm without uploading containers to a registry?

I've researched around before asking here, but all answers lead me to the same conclusion:
Build your Docker Compose stack locally
Tag and push the images to a registry (either a private or public one like Docker Hub)
Push the stack to the swarm using docker stack deploy --compose-file docker-compose.yml stackdemo
From here, the stack picks up and "pulls" the images from the registry and runs the containers
Is there no straightforward way to make the following (I think common sense) scenario work seamlessly?
Docker swarm manager has access (SSH keys) to pull the project from Git.
It periodically pulls the project and builds it "locally" using docker-compose up
When the build succeeds (containers are ready), it pushes the stack to the swarm using docker stack deploy, propagating the images to all worker nodes.
In that way, the original "source code" is only known by the Manager Node and only it has direct access to the Git repository.
Maintaining a registry (or paying for a cloud one) seems like a huge disadvantage of using Docker in Swarm Mode.
Side note: I've tried the approach of deploying a registry as a service within the stack and tagging + pushing the images to 127.0.0.1/myimage, but that led to its own set of problems, e.g. worker nodes that do not have an instance of the registry container running have no access to pull the image (the registry would need to be replicated to all nodes).
Use the docker save and docker load commands to transfer images from your dev machine to all of your swarm machines.
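A minimal sketch of that approach, assuming SSH access to each node (my_app:1.0.0 and user@worker-node are placeholders); it works, but it has to be repeated for every node and for every new build:

# stream the image from the dev machine straight into a node's local cache
docker save my_app:1.0.0 | ssh user@worker-node 'docker load'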
Docker Swarm is an orchestrator, intended for managing a cluster of Docker nodes. When a service is deployed, the Docker engine can start it on any node in the cluster (any node that satisfies the placement constraints, if provided). Now, if you don't have a registry and the images are only available locally on one node, Docker can't guarantee that all nodes will run the same version of the image. Hence, Swarm pulls the image from a registry and then deploys it to the nodes.
Having a registry also helps in keeping a copy of the images: the registry keeps all versions even if images are pruned on one or all of the Docker nodes. You can enable backups of the registry, so there is little chance of losing any image that was built and pushed to it.
Starting a registry (at least on localhost) is very easy - but that's not what your question is about.
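For reference, the stack-deploy walkthrough linked below starts a throwaway registry as a swarm service with a single command:

docker service create --name registry --publish published=5000,target=5000 registry:2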
Coming to your question: you can keep the stack's compose file in the same directory as your Dockerfile, and in the compose file define the service with a build section, so the image can be built from that Dockerfile right before you deploy the stack:
version: "3.9"
services:
web:
image: 127.0.0.1:5000/stackdemo
build: .
ports:
- "8000:8000"
Here,
build: .
this line builds the image with the given name, tagged for the registry on localhost, but it is not pushed automatically; that has to be done manually (e.g. with docker-compose push).
So, in the Dockerfile you can run git pull or git clone, then run the commands to build your code, and so on.
Here's a link which provides simple steps to start a registry and build the image on the fly while deploying the stack:
https://docs.docker.com/engine/swarm/stack-deploy/
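The commands in that walkthrough boil down to roughly the following (the image name and stack name are the tutorial's, so adjust them to your setup):

docker-compose build                                     # builds 127.0.0.1:5000/stackdemo from the Dockerfile
docker-compose push                                      # pushes it to the registry published on port 5000
docker stack deploy --compose-file docker-compose.yml stackdemo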
Also, Swarm does not really work without a registry, and hence it's not possible to just save and load an image and use it with swarm orchestration.

How to get transferable docker compose stack without dockerhub

I have a few docker images composed together in a stack using docker-compose.yml.
Now I want to transfer the whole docker compose stack to another host machine without uploading to Docker Hub,
and deploy it on the docker swarm.
I saw there is a thing called docker compose bundle, would that help?
If you’re deploying on a multi-host swarm (or something similar like Kubernetes or Nomad) you all but need a Docker registry. It doesn’t specifically have to be Docker Hub — quay.io, Amazon’s ECR, Google’s GCR, and self-hosted registries all work fine — but you do need to have pushed the built images somewhere where the orchestrator can retrieve them by name.
I’ve never used docker-compose bundle myself, but its documentation also notes that its operation “requires interaction with a Docker registry”.
The only real alternative is using docker save and docker load to manually move images between machines, but as a manual process it will get tedious very quickly, and you need to make sure an identical set of images are on every machine for consistency. Using a registry will be vastly easier.
The easiest way to do it is to use a Docker registry. The problem with Docker Hub is that you can only have one private repository on the free plan; the rest must be public or paid for.
Thankfully, there are other (free) alternatives:
Deploy your own private registry (a one-line example follows this list). Here is a nice tutorial where you can try it in the browser.
Use a free private registry. I personally use Codefresh. It can automatically build your image from a private repo (like Bitbucket, which has a free plan too), but you can also just use it like a "simple" docker registry and push and pull your Docker images there.
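For the first option, a minimal self-hosted registry can be started from the official registry image, for example (add TLS and authentication before exposing it beyond localhost):

docker run -d -p 5000:5000 --restart=always --name registry registry:2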

How to "docker push" to dynamic insecure registries?

OS: Amazon Linux (hosted on AWS)
Docker version: 17.x
Tools: Ansible, Docker
Our developers use Ansible to be able to spin up individual AWS spot environments that get populated with docker images that get built on their local machines, pushed into a docker registry created on the AWS spot machine, then pulled down and run.
When the devs do this locally on their Macbooks, ansible will orchestrate building the code with sbt, spin up an AWS spot instance, run a docker registry, push the image into the docker registry, command the instance to pull down the image and run it, run a testsuite, etc.
To make things better and easier for non-devs to be able to run individual test environments, we put the ansible script behind Jenkins and use their username to let ansible create a domain name in Route53 that points to their temporary spot instance environment.
This all works great without the registry -- i.e. using JFrog Artifactory to have these dynamic envs just pull pre-built images. It lets QA team members spin up any version of the env they want. But now to allow it to build code and push, I need to have an insecure registry and that is where things fell apart...
Since any user can run this, the Route53 domain name is dynamic. That means I cannot just hardcode the --insecure-registry entry in daemon.json. I have tried to find a way to set a wildcard registry, but it didn't seem to work for me. Also, since this is a shared build server (the one that is running the ansible commands), I don't want to keep adding entries and restarting docker because other things might be running.
So, to summarize the questions:
Is there a way to use a wildcard for the insecure-registry entry?
How can I get docker to recognize an insecure-registry entry without restarting the docker daemon?
So far I've found this solution to satisfy my needs, but I'm not 100% happy with it yet; I'll keep working on it. It doesn't handle the first case of a wildcard, but it does seem to answer the second question about reloading without a restart.
The first problem was that I was editing the wrong file. Docker doesn't respect /etc/sysconfig/docker, nor does it respect $HOME/.docker/daemon.json. The only file that works on Amazon Linux for me is /etc/docker/daemon.json, so I manually edited it, then tested a reload and verified with docker info. I'll work on programmatically inserting entries as needed, but the manual test works:
sudo vim /etc/docker/daemon.json
sudo systemctl reload docker.service
docker info
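For illustration, this is roughly what the manual edit plus reload looks like end to end; registry.example.internal:5000 is a made-up stand-in for the dynamic Route53 name, and overwriting daemon.json wholesale assumes nothing else is configured in it:

# write the insecure-registries entry (clobbers any existing daemon.json)
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["registry.example.internal:5000"]
}
EOF
sudo systemctl reload docker.service   # SIGHUP reload, no full restart
docker info                            # the entry should appear under Insecure Registries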

Docker compose deployment

I have a question about docker compose. I am new to docker and I can't figure out the "right" flow for deployment.
Let's assume we have a "Dockerfile" which contains the steps to build an image from the project's source files.
And we have a "docker-compose.yml" which actually builds this "Dockerfile" along with 2 more services.
It is not important here, but let's say they are nginx, webapi (the actual project) and mongodb.
So, if I run "docker compose up" on my machine, it will create 3 images (webapi, nginx, mongodb) and run them. Everything is perfect here.
The question is: what do I need to do to get it deployed to production? What I have tried:
I can check out the git repo on the production server and run "docker compose up", and it will work. But I think this is not the way to go: using the production server to build projects seems silly.
I can run "docker compose build" locally, get 3 images, push them to a docker repository, go to production, download the images from the repository and start them one by one. In this case I don't see the point of "docker compose" at all; I lose the easy way to define volumes and relations between images, which I can do with docker compose. It would also require a lot of manual activity, or some custom scripts to automate it.
It seems like there is a way to use "docker machine" to connect to a remote server and use "docker compose up", but I was not able to make it work. For some reason it was not possible to connect from Windows to a remote docker on Linux.
Before going further with that option I need to understand/confirm: in the case of a remote docker and "docker compose up", where does the build happen? And if I have a few volumes defined in "docker-compose.yml", are they going to be created on the local machine or on the remote one?
For my project I went with an option that resembles your second proposal, but a bit more automatic. The CI does the docker build for webapi, as this is the only part of my system that is actually built from sources. The CI also does docker push to my private repository. The next step is running docker-compose up on production. The compose file is not building the webapi, it is only configuring it, so rather than using a build section it uses image. Docker compose also configures the other services that are required (nginx, mongo) and the networks for them to communicate. Even if you have custom image creation for other services, you do not need a full dev environment to create them. For full automation you can use docker machine to execute it remotely. Note that docker will not update images that are already downloaded when you run docker-compose up; you need to docker pull them.
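A rough sketch of that flow, with registry.example.com/webapi as a hypothetical stand-in for my private repository's image name:

# on the CI machine
docker build -t registry.example.com/webapi:1.2.3 .
docker push registry.example.com/webapi:1.2.3

# on the production host, where docker-compose.yml uses image: registry.example.com/webapi:1.2.3
docker-compose pull
docker-compose up -d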

Local development and swarm service image update

We are using Docker Swarm on developers' machines for development. A Docker service is using e.g. the foo:beta image.
When a developer builds a new feature for foo, he builds a new image of the container locally, under the same name (the sha is different).
However, we are not able to update the service to use the new image version. We tried
docker service update --force --image <component>
w/o success.
We are running the latest edge docker build: 17.05.0-ce-rc1-mac8 (16582)
The key is to use a local tag for images, one that does not exist in the remote repository. When Swarm can't find the image by the given tag in the remote repo, it will use the local one.
For that purpose, we also tag all developer-related containers with e.g. a dev tag that only exists on the developers' machines. This way we can update the image and, by forcing the service update, update the running code.
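A minimal sketch of that workflow, assuming the service is named foo and was created from a foo:dev tag that exists only on the developer's machine:

# rebuild locally under the dev-only tag
docker build -t foo:dev .

# force a redeploy; the tag can't be resolved remotely, so the local image is used
docker service update --force --image foo:dev foo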
