How to deploy a Docker image from Docker Hub to a server with GitHub Actions

I want to implement CI/CD for my application. So far I have managed to build my image and upload it to Docker Hub with GitHub Actions. Now I need a way to pull that image onto my VPS and run it. I do not know how to achieve that; I have tried multiple YouTube videos, but none of them show it.
Could someone point me in the right direction?

What I have done is:
Set up a webhook on the VPS. It serves as a webhook server: define an endpoint and a script to redeploy (the script will be executed when the endpoint is called and the conditions are met).
In your GitHub Actions workflow, add a new step that sends a request to this webhook server endpoint.
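As a minimal sketch, that step could simply run a curl command like the one below (the URL path, header name, and secret variables are placeholders, not anything defined in the question; you would store them as repository secrets):
# Sent from a GitHub Actions step once the new image has been pushed
curl -fsS -X POST "$WEBHOOK_URL/redeploy" -H "X-Webhook-Token: $WEBHOOK_SECRET"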

The usual flow, once the image exists on Docker Hub, is:
Use the docker login command to log in as a user that has permission to pull the image.
Either pre-pull the image using the docker pull command, or just use the docker run command directly; it will pull the image if it does not exist locally and then run it.
For an example with Nginx, whose image resides on Docker Hub, the official docs show
docker run --name mynginx1 -p 80:80 -d nginx
a command which will pull the Nginx image (latest in this case), run a container named mynginx1, and publish port 80 on the host, mapping it to port 80 inside the container.
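Applied to your own image, the same flow would look roughly like the lines below (the image name, container name, and ports are placeholders; substitute whatever you push from GitHub Actions):
# On the VPS: authenticate, fetch the latest image, and run it
docker login -u <dockerhub-user>
docker pull myname/myapp:latest
docker run -d --name myapp -p 80:3000 myname/myapp:latest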

There is a Docker image you can run on your server to watch all (or selected) running containers; when a new image is pushed to the Docker Hub registry, it will update the running container of your project.
It's called Watchtower:
containrrr/watchtower
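A minimal sketch of starting it, assuming the Docker socket is at its default path (see the Watchtower docs for options such as the polling interval):
# Watchtower needs the Docker socket to inspect and restart containers
docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower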

Related

Use cached Docker image for gitlab-ci

I was wondering whether it is possible to use cached Docker images from the GitLab registry in gitlab-ci.
For example, I want to use the node:16.3.0-alpine Docker image. Can I cache it in my GitLab registry and pull it from there to speed up my GitLab CI, instead of pulling it from Docker Hub?
Yes, GitLab's dependency proxy feature allows you to configure GitLab as a "pull-through cache". This is also useful for working around rate limits of upstream sources like Docker Hub.
It should be faster in most cases to use the dependency proxy, but not necessarily so. It's possible that Docker Hub can be more performant than a small self-hosted server, for example. GitLab runners are also remote with respect to the registry and not necessarily any "closer" to the GitLab registry than to any other registry over the internet. So, keep that in mind.
As a side note, the absolute fastest way to use cached images is to self-host your GitLab runners and keep the images directly on the host. That way, when a job starts and the image already exists on the host, the job starts immediately because it does not need to pull the image (depending on your pull policy, and assuming you reference the image in the image: declaration of your job).
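As a hedged sketch of what pulling through the dependency proxy looks like (the hostname and group path are placeholders; GitLab also exposes this prefix as the predefined CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX variable for use in an image: declaration):
# Pull node through the group's dependency proxy instead of Docker Hub
docker pull gitlab.example.com/my-group/dependency_proxy/containers/node:16.3.0-alpine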
I'm using a corporate GitLab instance where, for some reason, the Dependency Proxy feature has been disabled. The other option you have is to create a new Docker image on your local machine, then push it to the Container Registry of your personal GitLab project.
# First create a one-line Dockerfile containing "FROM node:16.3.0-alpine"
docker pull node:16.3.0-alpine
docker build . -t registry.example.com/group/project/image
docker login registry.example.com -u <username> -p <token>
docker push registry.example.com/group/project/image
where the image tag should be constructed based on the example given on your project's private Container Registry page.
Now, in your CI job, you just change image: node:16.3.0-alpine to image: registry.example.com/group/project/image. You may have to run the docker login command (using a deploy token for credentials, see Settings -> Repository) in the before_script section. I think newer versions of GitLab will have the runner authenticate to the private Container Registry using system credentials, but that could vary depending on how it's configured.

Is it possible to update the docker image after pushing it to dockerhub/ACR/etc at runtime, the way the docker cp command works on localhost

I have an Angular application and I have created a Docker image of it, which I have published to Azure Container Registry (ACR).
I want to pull the image from ACR, deploy it to Azure App Service, and change images and CSS files inside the Docker container at runtime.
I want to know if it is possible to update the images/CSS files at runtime, as we can do with the docker cp command on localhost.
I would suggest using CI/CD for this purpose.
Just create a webhook in ACR. Then, whenever the image gets updated, the Web App will automatically get "notified" and pull in the new change.
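A hedged sketch of creating such a webhook with the Azure CLI (the registry name, webhook name, and target URI are placeholders; check az acr webhook create --help for the exact options available in your CLI version):
# Call the Web App's deployment endpoint whenever a new image is pushed to the registry
az acr webhook create --registry myregistry --name appservicehook --actions push --uri https://<your-webapp-webhook-endpoint>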

How can I docker commit azure container instance to azure container registry

We have Ansible configured to deploy our various applications onto an IIS environment. I am trying to create a Docker image of the deployed applications so that I can just start up containers as we need them for testing and otherwise.
I am planning to build on the Windows IIS image, start the container on Azure, run our Ansible to install everything on the server, and then save the container as an image.
I cannot find any documentation on how I can docker commit the container image into our private Azure container registry.
Is it possible?
If you have an existing Docker registry in Azure, you should be able to use the az acr login --name myregistry command to authenticate to it (https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli). Make sure you have a registry created for the container image you want to push up.
Next, you can run the container in Azure and do all the installation you want. SSH or RDP into the instance in Azure that is running this container, then run docker ps and find the container ID of the correct container. Next, use docker commit <container id> myregistry.azurecr.io/samples/nginx.
Then, just docker push myregistry.azurecr.io/samples/nginx
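Put together, a hedged sketch of the whole sequence (the registry and image names follow the example above and are placeholders):
# Authenticate to the Azure Container Registry
az acr login --name myregistry
# Find the ID of the running container on the host where it runs
docker ps
# Snapshot the container's current filesystem as a new image
docker commit <container id> myregistry.azurecr.io/samples/nginx
# Push the committed image to ACR
docker push myregistry.azurecr.io/samples/nginx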
Also, I'm not sure what your use case is, but starting a container in order to modify and commit it in that way seems like an atypical use case for Docker, since the build isn't reproducible via a Dockerfile. It looks like there are ways to replace Dockerfiles using Ansible playbooks with something like ansible-container (https://docs.ansible.com/ansible-container/), so you might want to take a look at that (I've never used this tool).

How to access the locally built docker-image on the docker-swarm manager?

While trying to create a service on my docker-machine manager node, I got an "image doesn't exist" error. When I ran the docker images command on the manager node, no image was there, as expected. But on my local Docker host I have those images, and I want to access them from the manager node. I've read a few articles which mentioned that I may have to upload the image to Docker Hub and then pull it from there, but I want to access it locally. Is there any way to do this, as I'm a newbie to Docker?
This is the command I tried on my manager machine:
docker@manager:~$ docker service create --name "api-client" -p 4200:4200 api_client
This is my docker images output:
REPOSITORY TAG IMAGE ID CREATED SIZE
api_client latest 097b19c4deb8 27 hours ago 1.15GB
But in my docker@manager terminal, the Docker image list is empty.
The problem is that there is no repository holding the image. The image needs to be pulled from a repository onto each node in the Swarm before it can run. In general you need to do the following:
Set up a repository. If you want a local repository there is a guide here, but it will be some hassle to get it up and running in an "insecure http" version (a minimal sketch is shown after this list). An easier way is to get yourself a free Docker Hub account and put your image there.
Tag your local image with the repository name. How to do this is shown in the guide above.
docker tag <local image> <repository>/<image:tag>
Log in to the repository (if it is in the cloud) and push your image to it
docker login
docker push <repository>/<image>:<tag>
To run the image (your command becomes):
docker service create --name "api-client" -p 4200:4200 <repository>/<image>:<tag>
You can also try to pull the image into the local cache of a node using
docker pull <repository>/<image>:<tag>
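If you go the local-registry route, a minimal hedged sketch, loosely following Docker's stack-deploy tutorial, would look like the lines below (the registry address and tags are placeholders, and TLS/insecure-registry configuration is glossed over):
# Run a registry as a Swarm service so every node can reach it through the routing mesh
docker service create --name registry --publish published=5000,target=5000 registry:2
# Tag the locally built image with the registry address
docker tag api_client 127.0.0.1:5000/api_client:latest
# Push it so the Swarm nodes can pull it
docker push 127.0.0.1:5000/api_client:latest
# Create your service using the registry-qualified image name
docker service create --name "api-client" -p 4200:4200 127.0.0.1:5000/api_client:latest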

How do I connect to a web app running on an image on docker hub?

I am learning Docker, just getting my feet wet... I begin by begging your pardon, since I will probably be using terminology badly :-(
I have successfully built my first container and run it locally.
The container image is a node.js + express web app.
Locally I run my image this way:
docker run -p 80:3000 myname/myimage
If I point my browser to the local server IP
http://192.168.1.123:80/
I access my app in all of its glory.
Then, I push it to Docker Hub with this command:
docker push myname/myimage
So far so good. The question is: am I supposed to be able to run my app from docker cloud already, or should I push it to AWS, for example?
By executing docker push myname/myimage, you have only sent your image to Docker Hub.
That image can then be run to create a container; but as it is, it is not running anywhere.
You will effectively have to run it on some machine or service in order to access your app.
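For example, a hedged sketch of doing that on any Docker-capable server (the server itself is an assumption; the image name and port mapping are the ones from the question):
# On the server: fetch the image from Docker Hub and run it
docker pull myname/myimage
docker run -d -p 80:3000 myname/myimage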
Concerning the terminology:
you build an image, not a container
you push (or pull) an image to (from) docker-hub
you run a container from an image
