Redeploy all docker services by image on my swarm

My workflow is:
Push code with git
Build a docker image on my CI server (drone.io)
Push the image to my private registry
At this point I need to update all services in all stacks with the new image version from the registry.
Does anyone have an idea how to do this properly?
I saw the official tutorial for rolling updates in the docker docs:
docker service update --image <image>:<tag> service_name
But the problem is that I don't know the service and stack names.
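One way around that is to ask the swarm itself which services currently run the image, then update each of them. A minimal sketch, assuming a hypothetical image path registry.example.com/myapp and a tag produced by CI:

IMAGE=registry.example.com/myapp   # placeholder: your private registry path
TAG=build-123                      # placeholder: the tag your CI job just pushed

# Find every service whose current image matches, then roll each one to the new tag
for SERVICE in $(docker service ls --format '{{.Name}} {{.Image}}' \
    | awk -v img="$IMAGE" '$2 ~ img { print $1 }'); do
  docker service update --with-registry-auth --image "$IMAGE:$TAG" "$SERVICE"
done

Since docker service ls lists services across all stacks, this needs neither service nor stack names up front; --with-registry-auth forwards your registry credentials to the worker nodes so they can pull the private image.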

Related

How to deploy a docker image from dockerhub to a server with github actions

I want to implement CI/CD for my application. So far I have managed to build my image and upload it to Docker Hub with GitHub Actions. Now I need a way to pull that image on my VPS and run it. I do not know how to achieve that; I tried multiple YouTube videos, but none of them show it.
Could someone point me in the right direction?
What I have done is:
Set up a webhook server on the VPS, with an endpoint and a script to redeploy (the script is executed when the endpoint is called)
In your GitHub Actions workflow, add a new step that sends a request to this webhook server endpoint (see the sketch below)
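A minimal sketch of the two pieces, with the hook URL, image, and container names as hypothetical placeholders:

# redeploy.sh — run by the webhook server on the VPS
docker pull myuser/myapp:latest            # fetch the image just pushed by CI
docker rm -f myapp 2>/dev/null || true     # stop and remove the old container, if any
docker run -d --name myapp -p 80:80 myuser/myapp:latest

# Extra step in the GitHub Actions workflow: call the hook once the push succeeds
curl -fsS -X POST https://my-vps.example.com/hooks/redeploy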
The usual flow once the image exists on Docker Hub is:
Use the docker login command to log in as a user that has permission to pull the image
Either pre-pull the image with the docker pull command, or just use the docker run command directly; it will pull the image if it isn't present locally and then run it.
For example with Nginx, the image resides on Docker Hub, and with the help of the official docs you can use
docker run --name mynginx1 -p 80:80 -d nginx
This command pulls the Nginx image (latest in this case) and runs a container named mynginx1, publishing port 80 on the host and mapping it to port 80 inside the container.
There is a docker image you can run on your server to watch all (or selected) containers; when there is a new push to the Docker Hub registry, it updates the running container of your project to the new image.
It's called Watchtower:
containrrr/watchtower
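A minimal sketch of running it; Watchtower needs the Docker socket mounted so it can inspect and restart the other containers:

docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower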

Use cache docker image for gitlab-ci

I was wondering, is it possible to use cached docker images from the GitLab registry for gitlab-ci?
For example, I want to use the node:16.3.0-alpine docker image. Can I cache it in my GitLab registry and pull it from there, to speed up my GitLab CI instead of pulling it from Docker Hub?
Yes, GitLab's Dependency Proxy feature allows you to configure GitLab as a "pull-through cache". This is also beneficial for working around rate limits of upstream sources like Docker Hub.
It should be faster in most cases to use the dependency proxy, but not necessarily so. It's possible that Docker Hub is more performant than a small self-hosted server, for example. GitLab runners are also remote with respect to the registry and not necessarily any "closer" to the GitLab registry than to any other registry over the internet. So keep that in mind.
As a side note, the absolute fastest way to retrieve cached images is to self-host your GitLab runners and hold the images directly on the host. That way, when a job starts, if the image already exists on the host, the job starts immediately because it does not need to pull the image at all (depending on your pull policy, and assuming you're using the image in the image: declaration for your job).
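For reference, images are pulled through the proxy by prefixing the group path. A hedged sketch, with gitlab.example.com and my-group standing in for your instance and group:

# Pull node through the group's Dependency Proxy instead of Docker Hub
docker pull gitlab.example.com/my-group/dependency_proxy/containers/node:16.3.0-alpine

In .gitlab-ci.yml, the predefined variable CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX expands to that same prefix, so image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/node:16.3.0-alpine does the equivalent for a job image.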
I'm using a corporate GitLab instance where, for some reason, the Dependency Proxy feature has been disabled. The other option you have is to build a new Docker image on your local machine, then push it into the Container Registry of your personal GitLab project.
# First create a one-line Dockerfile containing "FROM node:16.3.0-alpine"
docker pull node:16.3.0-alpine                               # fetch the upstream image
docker build . -t registry.example.com/group/project/image   # rebuild it under your registry's namespace
docker login registry.example.com -u <username> -p <token>   # authenticate to your project registry
docker push registry.example.com/group/project/image         # upload the copy
where the image tag should be constructed based on the example given on your project's private Container Registry page.
Now, in your CI job, you just change image: node:16.3.0-alpine to image: registry.example.com/group/project/image. You may have to run the docker login command (using a deploy token for credentials; see Settings -> Repository) in the before_script section. I think newer versions of GitLab have the runner authenticate to the private Container Registry using system credentials, but that can vary depending on how it's configured.

How to pull new docker images and restart docker containers after building docker images on gitlab?

There is an ASP.NET Core API project, with its sources in GitLab.
I created a GitLab CI/CD pipeline to build the docker image and put it into the GitLab docker registry
(thanks to https://medium.com/faun/building-a-docker-image-with-gitlab-ci-and-net-core-8f59681a86c4).
How do I update the docker containers on my production system after pushing the image to the GitLab docker registry?
*by update I mean:
docker-compose down && docker pull && docker-compose up
The best way to do this is to use an image puller; lots of open-source ones are available, or you can write your own in shell. There is one here. We use swarm, and we use this hook concept to be triggered from our CI/CD pipeline. Once our build stage is done, we call the hook URL over HTTP, and the host pulls the updated image. One disadvantage of this is that you need a daemon watching your hook task so that it doesn't crash or go down. So my suggestion is to run the hook task itself as a docker container with its restart policy set to always.
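A hedged sketch of what such a hook script might run on the production host, using the compose commands from the question (the registry path and compose file location are placeholder assumptions):

# Log in with a deploy token so the private image can be pulled
docker login registry.gitlab.com -u <deploy-token-user> -p <deploy-token>

# Pull the new image, then recreate only the services whose image changed
docker-compose -f /srv/app/docker-compose.yml pull
docker-compose -f /srv/app/docker-compose.yml up -d

Note that pull followed by up -d avoids the full down from the question: compose recreates only the containers whose image actually changed.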

Make Nginx image available in local/private repository for production safe perspective in kubernetes

How can we make the nginx image available in our local/private repository for Kubernetes?
Let's say I am using the nginx image at tag version x.x. I have tested it in my dev and test environments and want to move it to prod.
What if the image is no longer present in the nginx repository?
Is there a way to pull the x.x version of nginx into our local/private repository?
There is a high risk if the image is not available, so it would be helpful if someone could guide me on how to handle this.
If you have docker installed on your machine, pull the docker image:
$ docker pull nginx:x.x
Now, you can't use this local docker image inside Kubernetes directly. You need to do one additional thing:
Push this image into your docker registry in the cloud.
$ docker tag nginx:x.x <your-registry>/nginx:x.x
$ docker push <your-registry>/nginx:x.x
And then use <your-registry>/nginx:x.x from your registry.
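To switch a running workload over to the private copy, something like this sketch should work; the deployment name (my-nginx), container name (nginx), and secret name (regcred) are assumptions:

# Create a pull secret so the cluster can authenticate to the private registry
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry> --docker-username=<user> --docker-password=<token>

# Point the existing deployment's container at the private copy
kubectl set image deployment/my-nginx nginx=<your-registry>/nginx:x.x

For a private registry, the pod spec also needs an imagePullSecrets entry referencing regcred.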

How I use a local container in a swarm cluster

A colleague discovered Docker and wants to use it for our project, so I started using Docker for testing. After reading an article about Docker Swarm, I wanted to test it.
I have installed 3 VMs (Ubuntu Server 14.04) with docker and swarm. I followed some how-tos (http://blog.remmelt.com/2014/12/07/docker-swarm-setup/ and http://devopscube.com/docker-tutorial-getting-started-with-docker-swarm/). My cluster works: I can launch, for example, a basic apache container (the image was pulled from Docker Hub), but I want to use my own image (an apache server with my web site).
I tried to load an image (after saving it as a .tar), but this option isn't supported in clustering mode; the same goes for the import option.
So my question is: can I use my own image without pushing it to Docker Hub, and how do I do this?
If your own image is based on a Dockerfile that you build, you can execute the build command on your project while targeting the swarm.
However, if the image wasn't built from a Dockerfile but created manually, you need a registry in between that you can push to: either Docker Hub or some other registry solution like https://github.com/docker/docker-registry
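A minimal sketch of the registry route, with localhost:5000 and the image name as placeholder assumptions; swarm nodes on other hosts would use the registry machine's address instead of localhost (and may need it allowed as an insecure registry if it has no TLS):

# Run a self-hosted registry, then retag and push the image to it
docker run -d -p 5000:5000 --name registry registry:2
docker tag my-apache-site localhost:5000/my-apache-site
docker push localhost:5000/my-apache-site

# Nodes in the cluster can now pull the image by the same name
docker pull localhost:5000/my-apache-site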
