Can I deploy a local build to Docker swarm in a virtual machine?

I am learning Docker and trying to follow the Docker tutorial, and am on step 4.
Basically, in this step we create 2 VMs for the Docker swarm: one as swarm manager and one as swarm worker.
As I understand it, the swarm pulls the image pushed to Docker Hub to get the service working on the virtual machines. The problem is that I am not pushing my built image to Docker Hub.
My question is: can I use a local build to deploy to the swarm VMs?
I tried to change the image line in the example docker-compose.yml to build, like so:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
# image: friendlyhello
build: .
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "4000:80"
networks:
- webnet
networks:
webnet:
It does not work, of course, which is why I am asking: is there a way to do this?

You can create a local registry on the VM or your local machine and push/pull images from that local repo:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Then name/tag your images using
localhost:5000/image_name:tag
(the repository part of the name must be lowercase). Then push the image using
docker push localhost:5000/image_name:tag
This lets you keep your images in a local registry that your swarm can use without pushing to Docker Hub.
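For example, here is a minimal end-to-end sketch; friendlyhello and getstartedlab are just the image and stack names the tutorial uses, so adjust them to your setup:
# build locally, tag for the local registry, and push
docker build -t localhost:5000/friendlyhello:v1 .
docker push localhost:5000/friendlyhello:v1
# in docker-compose.yml, point the service at the registry image:
#   image: localhost:5000/friendlyhello:v1
# then deploy the stack from the swarm manager
docker stack deploy -c docker-compose.yml getstartedlab
Note that every swarm node must be able to reach the registry: if it runs on your host machine rather than on the nodes, tag with the host's IP instead of localhost and, for plain HTTP, add that address to each node's insecure-registries daemon setting.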

Related

Docker push on local registry gets stuck

I'm trying to push an image to a local Docker registry deployed with docker-compose in the following way:
services:
  docker-registry:
    image: registry:2
    restart: unless-stopped
    environment:
      - REGISTRY_STORAGE_DELETE_ENABLED=true
    volumes:
      - registry-data:/var/lib/registry
Note: this is inside a Dev Container, and the registry port is forwarded directly from .devcontainer.json, but that is equivalent to forwarding 5000:5000 in docker-compose; I have no problem contacting the registry.
Whenever I attempt to push an image to the registry, a layer gets stuck at 48.8MB (I attempted this a lot of times, recreating the service, deleting the volume, restarting everything):
~ docker push localhost:5000/some-image
Using default tag: latest
The push refers to repository [localhost:5000/some-image]
1562583dd903: Preparing
1562583dd903: Pushing 227.3kB/19.88MB
1562583dd903: Pushing 6.14MB/19.88MB
1562583dd903: Pushing 9.122MB/19.88MB
1562583dd903: Pushing 18.3MB/19.88MB
1562583dd903: Pushing 19.98MB
86959104e6a0: Pushed
86959104e6a0: Pushing 18.25MB/2.068GB
86959104e6a0: Pushing 22.7MB/2.068GB
86959104e6a0: Pushing 50.83MB/2.068GB
a3038b-3bfe-4903-951d-8d5529552f96
c735c85250bd: Mounted from some-other-image
b0f6b3bc04d7: Mounted from some-other-image
f31afd463445: Mounted from some-other-image
a9099c3159f5: Pushing [===================> ] 48.8MB/124.1MB
The command then hangs forever. I tried pushing with the docker command on my host and also through the Docker API using Go code, and encountered the exact same behaviour.
Any idea what is wrong here?
I found a solution to the problem (but not the cause); it seems related to Dev Containers.
I ran the service this way in the docker-compose.yml run by devcontainer.json:
services:
  docker-registry:
    image: registry:2
    restart: unless-stopped
    environment:
      - REGISTRY_STORAGE_DELETE_ENABLED=true
    volumes:
      - registry-data:/var/lib/registry
In devcontainer.json, I forwarded the ports this way as I'm used to doing to have the ports listed in VS Code ports section:
"forwardPorts": [
"docker-registry:5000",
],
"portsAttributes": {
"docker-registry:5000": {
"label": "Docker registry",
"onAutoForward": "silent",
"requireLocalPort": true
}
This resulted in a correct forward of the container's port 5000 to port 5000 on localhost.
However, by removing these references from .devcontainer and forwarding ports directly from the docker-compose.yml, I no longer have the initial issue:
services:
  docker-registry:
    image: registry:2
    restart: unless-stopped
    environment:
      - REGISTRY_STORAGE_DELETE_ENABLED=true
    volumes:
      - registry-data:/var/lib/registry
    ports:
      - 5000:5000

Running Kubernetes on Windows

I am using Docker Toolbox (Windows 7) to create my Docker image, and now I would like to use Kubernetes for container orchestration.
I want to run Kubernetes locally; I installed it using minikube and kubectl. Is that the best way? Can I use k3s on Windows 7?
And is it possible to create a private registry, like Docker Hub, on Windows 7?
Thank you.
The easiest way to experiment with Kubernetes locally is with Minikube.
As for a Docker registry, I would suggest running the official registry image from Docker Hub. When you want to step up, Nexus is a really nice choice.
If you want to play with Kubernetes, the latest version of Docker Desktop allows you to set up a fully functional Kubernetes environment on your desktop and enable it with a single click; see the Docker docs.
A private registry allows you to store your own images and pull official images provided by vendors. Docker Hub is just one of many registries available; it happens to be a cloud service.
Docker Desktop includes a standalone Kubernetes server and client, as well as Docker CLI integration. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.
Refer to: https://docs.docker.com/docker-for-windows/kubernetes/
The Kubernetes server runs within a Docker container on your local system, and is only for local testing. When Kubernetes support is enabled, you can deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server does not affect your other workloads.
You can deploy a stack on Kubernetes with docker stack deploy, the docker-compose.yml file, and the name of the stack.
docker stack deploy --compose-file /path/to/docker-compose.yml mystack
docker stack services mystack
To run on Kubernetes, specify the orchestrator in your stack deployment:
docker stack deploy --orchestrator kubernetes --compose-file /path/to/docker-compose.yml mystack
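Once the stack is deployed with the Kubernetes orchestrator, you can sanity-check the workloads with kubectl (assuming kubectl points at the Docker Desktop context; mystack is the stack name from the commands above):
kubectl get pods
kubectl get services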
Create a volume directory for nexus-data. I used the /nexus-data directory, which is the mount point of the second disk:
mkdir /nexus-data
chown -R 200 /nexus-data   # UID 200 is the nexus user inside the sonatype/nexus3 image
Example apps:
version: '3.3'
services:
  traefik:
    image: traefik:v2.2
    container_name: traefik
    restart: always
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=true"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
    ports:
      - 80:80
      - 443:443
    networks:
      - nexus
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  nexus:
    container_name: nexus
    image: sonatype/nexus3
    restart: always
    networks:
      - nexus
    volumes:
      - /nexus-data:/nexus-data
    labels:
      - traefik.port=8081
      - traefik.http.routers.nexus.rule=Host(`NEXUS.mydomain.com`)
      - traefik.enable=true
      - traefik.http.routers.nexus.entrypoints=websecure
      - traefik.http.routers.nexus.tls=true
      - traefik.http.routers.nexus.tls.certresolver=myresolver
networks:
  nexus:
    external: true
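Once Nexus is up and a Docker (hosted) repository is configured in it, it behaves like any other private registry. A rough sketch of pushing to it, assuming the hostname from the compose file above (lowercased, since hostnames are case-insensitive) and myapp as a placeholder image name:
docker login nexus.mydomain.com
docker tag myapp:latest nexus.mydomain.com/myapp:latest
docker push nexus.mydomain.com/myapp:latest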

Redis cluster with docker swarm using docker compose

I'm just learning docker and all of its goodness like swarm and compose. My intention is to create a Redis cluster in docker swarm.
Here is my compose file -
version: '3'
services:
  redis:
    image: redis:alpine
    command: ["redis-server", "--appendonly yes", "--cluster-enabled yes", "--cluster-node-timeout 60000", "--cluster-require-full-coverage no"]
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
    ports:
      - 6379:6379
      - 16379:16379
networks:
  host:
    external: true
If I add the networks: - host section, none of the containers start; if I remove it, the containers start, but when I try to connect it throws an error like CLUSTERDOWN Hash slot not served.
Specs -
Windows 10
Docker Swarm Nodes -
2 Virtual Box VMs running Alpine Linux 3.7.0 with two networks
VirtualBox VM Network -
eth0 - NAT
eth1 - VirtualBox Host-only network
Docker running inside the above VMs -
17.12.1-ce
This seems to work for me; the network config is from here:
version: '3.6'
services:
  redis:
    image: redis:5.0.3
    command:
      - "redis-server"
      - "--cluster-enabled yes"
      - "--cluster-config-file nodes.conf"
      - "--cluster-node-timeout 5000"
      - "--appendonly yes"
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
    networks:
      hostnet: {}
networks:
  hostnet:
    external: true
    name: host
Then run, for example:
echo yes | docker run -i --rm --entrypoint redis-cli redis:5.0.3 --cluster create 1.2.3.4{1,2,3}:6379 --cluster-replicas 0
The shell expands 1.2.3.4{1,2,3} to the three node addresses; replace the IPs with your own, obviously.
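To check that the cluster actually formed, you can query any one of the nodes; a quick sanity check, substituting one of your node IPs for 1.2.3.41:
docker run -i --rm --entrypoint redis-cli redis:5.0.3 -h 1.2.3.41 cluster info
# expect cluster_state:ok and cluster_slots_assigned:16384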
For anyone struggling with this: unfortunately it can't be done via docker-compose.yml yet. Refer to this issue: Start Redis cluster #79. The only way to do it is to get the IP addresses and ports of all the nodes running Redis and then run this command on any of the swarm nodes.
# Gives you all the command help
docker run --rm -it thesobercoder/redis-trib
# This creates all master nodes
docker run --rm -it thesobercoder/redis-trib create 172.17.8.101:7000 172.17.8.102:7000 172.17.8.103:7000
# This creates slave nodes. Note that this requires at least six nodes running master
docker run --rm -it thesobercoder/redis-trib create --replicas 1 172.17.8.101:7000 172.17.8.102:7000 172.17.8.103:7000 172.17.8.104:7000 172.17.8.105:7000 172.17.8.106:7000
Here is a repo for a Redis cluster:
https://github.com/jay-johnson/docker-redis-cluster/blob/master/docker-compose.yml

Docker stack deploy rolling updates volume issue

I'm running Docker for a production PHP-FPM/Nginx application, and I want to use a docker-stack.yml file to deploy to a swarm cluster. Here's my file:
version: "3"
services:
app:
image: <MYREGISTRY>/app
volumes:
- app-data:/var/www/app
deploy:
mode: global
php:
image: <MYREGISTRY>/php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
nginx:
image: <MYREGISTRY>/nginx
depends_on:
- php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
ports:
- "80:80"
volumes:
app-data:
My code is in the app container, whose image comes from my registry.
I want to update my code with docker service update --image <MYREGISTRY>/app:latest, but it's not working: the code is not changed.
I guess the service keeps using the local volume app-data instead.
Is it normal that the new container's data doesn't override the volume data?
Yes, this is the expected behavior. Named volumes are only initialized to the image contents when they are empty (the default state when first created). Updating the volume any time after that point would risk data loss from overwriting or deleting volume data that you explicitly asked to be preserved.
If you need the files to be updated with every new image, then perhaps they shouldn't be in a volume? If you do need these inside a volume, then you may need to create a procedure to update the volumes from the image, e.g. if this were a docker run, you could do:
docker run -v app-data:/target --rm <your_registry>/app cp -a /var/www/app/. /target/.
Otherwise, you can delete the volume, or simply remove all files from the volume, and restart your stack to populate it again.
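In swarm terms, that delete-and-repopulate procedure might look like the following sketch, where mystack is a placeholder stack name (stack deploys prefix volume names with the stack name):
docker stack rm mystack
# wait for the services and their containers to be gone, then remove the
# volume (repeat on every node that holds a copy of it)
docker volume rm mystack_app-data
# redeploy; the now-empty volume is re-initialized from the new image
docker stack deploy -c docker-stack.yml mystack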
I was having the same issue, with app and nginx containers sharing the same volume. My current solution is a deploy script which runs
docker service update --mount-add <mount> <service>
for app and nginx after docker stack deploy. This forces the volume to be updated for the app and nginx containers.

How to set the service mode when using docker compose?

I need to set the service mode to global while using compose files.
Is there any chance we can use this in a compose file?
I have a requirement where, for one service, there should be exactly one container on every node/host.
This doesn't happen with swarm's "spread" strategy: if a node goes down and comes back up, swarm just evens out the number of containers on each host, irrespective of services.
https://github.com/docker/compose/issues/3743
We can do this easily now with compose file format version 3, using the mode key under the deploy section.
Prerequisites -
Docker Compose version should be 1.10.0+
Docker Engine version should be 1.13.0+
Example compose file -
version: "3"
services:
nginx:
image: nexus3.example.com/prd-nginx-sm:v1
ports:
- "80:80"
networks:
- cheers
volumes:
- logs:/rest/out/
deploy:
mode: global
labels:
feature.description: "Frontend"
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: any
command: "/usr/sbin/nginx"
networks:
cheers:
volumes:
logs:
data:
Deploy the compose file -
$ docker stack deploy -c sm-deploy-compose.yml --with-registry-auth CHEERS
This deploys an nginx container on every node participating in the cluster.
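You can confirm that exactly one task landed on each node; CHEERS_nginx is the service name derived from the stack and service names above:
docker service ls
docker service ps CHEERS_nginx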
