Is it possible to update a Docker image at runtime, after pushing it to Docker Hub/ACR/etc., the way the docker cp command works on localhost?

I have an Angular application and I have created a Docker image of it, which I have published to Azure Container Registry (ACR).
I want to pull the image from ACR, deploy it to Azure App Service, and change the images and CSS files inside the Docker container at runtime.
I want to know if it is possible to update the images/CSS files at runtime, the way we do with the docker cp command on localhost.
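For example, on localhost I can replace a file in a running container like this (the container path is just an example, assuming an Nginx-served Angular build):
docker cp ./dist/assets/styles.css <container-id>:/usr/share/nginx/html/assets/styles.css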

I would suggest using CI/CD for this purpose.
Just create a webhook in ACR. Then, whenever the image gets updated, the Web App will automatically get "notified" and pull in the new change.
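A minimal sketch with the Azure CLI, assuming placeholder registry and webhook names; the URI should be the continuous-deployment webhook URL (it includes publishing credentials) copied from the Web App's Container settings:
# names below are placeholders; copy the exact webhook URI from the Web App
az acr webhook create --registry myregistry --name myapphook --actions push --uri https://<app-name>.scm.azurewebsites.net/api/registry/webhook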

Related

How to deploy a Docker image from Docker Hub to a server with GitHub Actions

I want to implement CI/CD for my application. So far I have managed to build my image and upload it to Docker Hub with GitHub Actions. Now I need a way to pull that image on my VPS and run it. I do not know how to achieve that; I have tried multiple YouTube videos, but none of them show it.
Could someone point me in the right direction?
What I have done is:
Set up a webhook server on the VPS: define an endpoint and a script to redeploy (the script is executed when the endpoint is called and its conditions are met).
In your GitHub Actions workflow, add a new step that sends a request to this webhook server endpoint.
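A rough sketch of both pieces, with the hostname, image name, and token secret as placeholder assumptions. The redeploy script on the VPS:
#!/bin/sh
docker pull mydockerhubuser/my-app:latest
docker stop my-app || true
docker rm my-app || true
docker run -d --name my-app -p 80:80 mydockerhubuser/my-app:latest
And the extra workflow step that calls the webhook endpoint:
- name: Trigger redeploy webhook
  run: curl -fsS -X POST "https://my-vps.example.com/hooks/redeploy" -H "X-Token: ${{ secrets.WEBHOOK_TOKEN }}"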
The usual flow once the image exists on Docker Hub is:
You use the docker login command to log in as a user that has permission to pull the image.
You can either pre-pull the image using the docker pull command, or just use the docker run command directly, which will pull the image if it does not exist locally and then run it.
For example, with Nginx, whose image resides on Docker Hub, the official docs show:
docker run --name mynginx1 -p 80:80 -d nginx
This command will pull the Nginx image (latest in this case) and run a container named mynginx1, publishing port 80 on the host and mapping it to port 80 inside the container.
There is also a Docker image you can run on your server to watch all (or selected) Docker containers; when there is a new push to the Docker Hub registry, it will update the running image of your project.
It's called Watchtower:
containrrr/watchtower
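A sketch of running it, following the Watchtower docs; it needs the Docker socket mounted so it can pull images and restart containers:
docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower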

GitLab CI error upon deploying a Docker image in swarm mode

Hi, I have a problem with updating/changing the image of my service on a server running Docker in swarm mode.
Here is the process of manually updating the service:
push the project to GitLab from the local machine.
pull the project from GitLab on the server.
build a Docker image as my-project:latest
tag my-project:latest as registry.gitlab.com/my-group/my-project:staging
push the image using docker push registry.gitlab.com/my-group/my-project:staging
run docker stack deploy -c ~/docker-stack.yml api --with-registry-auth
and it works fine.
However, if I move the commands above into a gitlab-ci.yml, the job finishes successfully but I get an error when it tries to update the service:
Updating service api_backend (id: r4gqmil66kehzf0oehzqk57on)
image registry.gitlab.com/my-group/my-project:staging could not be accessed on a registry to record its digest. Each node will access registry.gitlab.com/my-group/my-project:staging independently, possibly leading to different nodes running different versions of the image.
Also, the GitLab runner executes the commands with the shell executor.
I have tried different solutions; as you can see, I'm even using the --with-registry-auth flag.
To summarize:
everything works fine if I enter the commands manually, but I get an error when I use gitlab-ci.yml.
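For reference, the deploy job in my gitlab-ci.yml mirrors the manual steps roughly like this (the stage name and the --password-stdin login are assumptions; $CI_REGISTRY_* are GitLab's predefined CI variables):
deploy:
  stage: deploy
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker build -t my-project:latest .
    - docker tag my-project:latest registry.gitlab.com/my-group/my-project:staging
    - docker push registry.gitlab.com/my-group/my-project:staging
    - docker stack deploy -c ~/docker-stack.yml api --with-registry-auth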

How can I docker commit an Azure container instance to Azure Container Registry

We have Ansible configured to deploy our various applications in an IIS environment. I am trying to create a Docker image of the deployed applications so that I can just start up containers as needed for testing and otherwise.
I am planning to build on the Windows IIS image, start the container on Azure, run our Ansible to install everything on the server, and then save the container as an image.
I cannot find any documentation on how I can docker commit the container image into our private Azure container registry.
Is it possible?
If you have an existing Docker registry in Azure, you should be able to use the az acr login --name myregistry command to authenticate to it (https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli). Make sure you have a registry created for the container image you want to push up.
Next, you can run the container in Azure and do all the installation you want. SSH or RDP into the instance in Azure that is running this container. Now run docker ps and find the container ID of the correct container. Next, use docker commit <container id> myregistry.azurecr.io/samples/nginx.
Then, just docker push myregistry.azurecr.io/samples/nginx
Also, not sure what your use case is, but starting a container in order to modify and commit it this way is an atypical use case for Docker, since the build isn't reproducible from a Dockerfile. It looks like there are ways to replace Dockerfiles with Ansible playbooks using something like ansible-container (https://docs.ansible.com/ansible-container/), so you might want to take a look at that (I've never used this tool).

Create a new image from a container’s changes in Google Cloud Container

I have an image to which I should add a dependency. Therefore, I have tried to change the image while it is running in a container and create a new image from it.
I have followed this article with the following commands:
kubectl run my-app --image=gcr.io/my-project-id/my-app-image:v1 --port 8080
kubectl get pods
kubectl exec -it my-app-container-id -- /bin/bash
Then, in the shell of the container, I installed the dependency using pip install NAME_OF_Dependency.
Then I exited the container's shell and, as explained in the article, I should commit the change using this command:
sudo docker commit CONTAINER_ID nginx-template
But I cannot find the corresponding command for Google Kubernetes Engine with kubectl.
How should I do the commit in Google Kubernetes Engine?
As of K8s version 1.8, there is no way to hot-fix changes directly into images, for example by committing a new image from a running container. If you change or add something using exec, it will only last as long as the container is running. This is not best practice in the K8s ecosystem.
The recommended way is to use a Dockerfile and customise the image according to your necessities and requirements. After that, you can push the image to a registry (public/private) and deploy it with a K8s manifest file.
Solution to your issue (a sketch of these steps follows below):
Create a Dockerfile for your image.
Build the image using the Dockerfile.
Push the image to the registry.
Write the deployment manifest file as well as the service manifest file.
Apply the manifest files to the K8s cluster.
Now, if you want to change/modify something, you just need to change/modify the Dockerfile and follow the remaining steps.
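A minimal sketch of those steps for this case, reusing the question's image name; the v2 tag, the deployment/container name my-app, and the pip package placeholder are assumptions. The Dockerfile:
FROM gcr.io/my-project-id/my-app-image:v1
RUN pip install NAME_OF_Dependency
Then build, push, and roll out the new tag (gcloud auth configure-docker lets Docker push to gcr.io):
docker build -t gcr.io/my-project-id/my-app-image:v2 .
gcloud auth configure-docker
docker push gcr.io/my-project-id/my-app-image:v2
kubectl set image deployment/my-app my-app=gcr.io/my-project-id/my-app-image:v2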
As you know, containers are short-lived creatures that do not persist changed behaviour (modified configuration, changes to the file system). Therefore, it's better to bake new behaviour or modifications into the Dockerfile.
Kubernetes Mantra
Kubernetes is a Cloud Native product, which means it does not matter whether you are using Google Cloud, AWS, or Azure: it needs to have consistent behaviour on each cloud provider.

Docker - is it necessary to push images to remote server?

I have successfully built some Docker images.
Now I would like to start my microservices with docker-compose; unfortunately, I am unable to pull those images, i.e. repository callista/discovery-server not found: does not exist or no pull access. I solved this error by logging into my Docker Hub account and pushing those images to the remote registry, but it seems like overkill to send such large images (which are likely to change pretty soon) over the Internet over and over again, twice (push and pull).
Is it possible to configure Docker to use those images locally and not pull them from a remote server?
I use Docker 1.8 and work on Windows 10.
Do you need to run these images on a server different from the one you build them on?
If you do, you have some alternatives:
As @engineer-dollery said, you can run a registry inside your network; then you would not need to send images over the internet, only within your network. Docs: https://docs.docker.com/registry/deploying/
You could use docker save and docker load to move them around too (see the sketch after this list). Docs: https://docs.docker.com/engine/reference/commandline/save/
But if the server where you run the images is the same one where you build them...
...then you could just add the image tag to your docker-compose services and do a docker-compose build, as @lauri said. With the image option set, docker-compose will tag the built image with that name, and you can then docker run it. Or do docker-compose up --build so it always rebuilds when something changes in the Dockerfile.
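A sketch of the save/load route, using one of the question's images as an example:
docker save -o my-images.tar callista/discovery-server
# copy the tar to the target machine (scp, USB drive, etc.), then on that machine:
docker load -i my-images.tar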
If you define the build option in docker-compose.yml, you should be able to build the images locally with Docker Compose, and it will then use those images without pulling. By default, Docker Compose builds images only if they are not found locally. If you want to rebuild images, just add the --build option to the docker-compose up command: docker-compose up --build
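For example, a minimal docker-compose.yml along these lines (the service name and build path are assumptions; the image name comes from the question, and the image key makes Compose tag the built image with that name):
services:
  discovery-server:
    build: ./discovery-server
    image: callista/discovery-server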
Docker Compose build reference:
https://docs.docker.com/compose/compose-file/#build
