Docker - is it necessary to push images to a remote server?

I have successfully built some Docker images.
Now I would like to start my microservices with docker-compose, but unfortunately I am unable to pull those images: repository callista/discovery-server not found: does not exist or no pull access. I solved this error by logging into my Docker Hub account and pushing those images to the remote registry. But it seems like overkill to send such large images (which are likely to change pretty soon) over the Internet twice, over and over again (push & pull).
Is it possible to configure Docker to use those locally built images instead of pulling them from a remote server?
I use Docker 1.8 and work on Windows 10.

Do you need to run these images on a server different from the one you build them on?
If so, you have some alternatives:
As @engineer-dollery said, you can run a registry inside your own network; then you would not need to send images over the Internet, only within your network. Docs: https://docs.docker.com/registry/deploying/
You could use docker save and docker load to move them around too (see the sketch below). Docs: https://docs.docker.com/engine/reference/commandline/save/
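For example, a minimal save/load round trip might look like this (the image name comes from the question above; the tar file name is arbitrary):

docker save -o discovery-server.tar callista/discovery-server
# copy discovery-server.tar to the target machine, then:
docker load -i discovery-server.tar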
But if the server where you run the images is the same one where you build them...
...then you could just add the image tag to your docker-compose services and do a docker-compose build, as @lauri said. With image set, docker-compose will create an image with that name after the build, and then you could do docker run using it. Or do docker-compose up --build so it always rebuilds if something changes in the Dockerfile.
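As a rough sketch, a service that is both built locally and tagged could look like this in a docker-compose.yml using file format v2 or later, which allows build and image together (the build path is an assumption; the image name comes from the question):

version: "2"
services:
  discovery-server:
    build: ./discovery-server
    image: callista/discovery-server:latest

After docker-compose build, the image callista/discovery-server:latest exists locally, so docker-compose up can start it without pulling.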

If you define the build option in docker-compose.yml, you should be able to build images locally with Docker Compose, and it then uses those images without pulling. By default, Docker Compose builds images only if they are not found locally. If you want to rebuild images, just add the --build option to the docker-compose up command: docker-compose up --build
Docker Compose build reference:
https://docs.docker.com/compose/compose-file/#build
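For reference, build accepts either a plain path or, in newer Compose file formats, an expanded form with an explicit context and Dockerfile, roughly like this (the paths are hypothetical):

version: "2"
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile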

Related

How to run a Docker Compose image pulled from GitHub Packages

I have pushed a Docker image on GitHub Packages and now I would like to pull it and use it.
To run the image locally, I used to go to the related folder and run it with the command docker-compose up.
However now, by pulling from GitHub Packages, I just get the Docker image without any folder and I don't know how I can run it.
Inspecting the image, I can see it has all the files from the original folder, but when I try to run the docker-compose up ghcr.io/giuliomat/bdt-project command, I get an error saying that there is no docker-compose.yml in the directory. If I just use the command docker run ghcr.io/giuliomat/bdt-project, it runs only one of the two services specified in the docker-compose.yml file. How can I run the Docker Compose image correctly? Thanks in advance!
Update: Let me explain myself better. In the image there is a Dockerfile (which I have now uploaded in the question) that is used to build the web service. I developed the image locally and I have no problem running it with docker-compose up, but now I wanted to see what has to be done in order to run it when a user pulls it from my GitHub Packages, and this is my problem. The pulled image should have all the elements needed to run, but I don't know what command to use in order to tell Docker to run both services specified in the docker-compose.yml file, since when a user pulls from GitHub Packages they only get the image and no folder in which to run docker-compose up.
(The Dockerfile, docker-compose.yml, and the contents of the pulled Docker image were attached to the original question.)
Update:
A Docker image registry does not store yml files, so you should either provide a README.md for the user in the image registry (with the yml copied into it verbatim) and/or provide a link to the version-control repository where the rest of the files reside, so the user can clone it and use docker-compose up.
docker-compose up [options] [--scale SERVICE=NUM...] [SERVICE...] means: find the listed services (if specified, otherwise all of them) in the docker-compose.yml in the current working directory and run them.
So if you move out of the folder containing docker-compose.yml, it won't pick up the compose file and therefore won't work.
Also, to use the image you need to specify the image property of a service instead of build, because build works with a local Dockerfile and attempts to build an image instead of pulling it from the GitHub Docker image registry:
web:
  image: "ghcr.io/giuliomat/bdt-project:latest"
It'd be the same way you have it for the redis service.
Also, make sure you can pull the image locally first (otherwise docker login would be necessary prior to the compose commands):
docker pull ghcr.io/giuliomat/bdt-project
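Putting it together, a minimal local docker-compose.yml around the pulled image might look like this (the redis service is taken from the answer above; its exact image tag is an assumption):

version: "3"
services:
  web:
    image: "ghcr.io/giuliomat/bdt-project:latest"
  redis:
    image: "redis:latest"

With that file saved anywhere on the machine, running docker-compose up from its folder starts both services without needing the original project.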

How can I make an image from the current environment to share docker hub?

I have some services running when I run the docker-compose up command. Now I want to make an image from the current environment and share it on Docker Hub, so that I can later just use a docker pull/run my_own_image command against Docker Hub.
Is there any way to do that?
Pretty much anything that you can do with images or containers in Docker, you can do with Compose. In your case, since you can push your custom image to your Docker Hub registry using the docker image push (or docker push) command, you can do the same with Compose.
As for Compose, you use docker-compose push (no surprises there – consistency between APIs/CLIs).
Tip: when in doubt, use --help. It's the best way (next to Google) to explore a CLI. If you're not sure what commands/options are available for Compose, just type docker-compose --help. If you want to see the available options for the push command (for example), use docker-compose push --help.
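As a sketch, docker-compose push pushes the images of services that have an image name set (the username and tag below are hypothetical). First name the image in docker-compose.yml:

version: "2"
services:
  web:
    build: .
    image: myuser/my_own_image:latest

Then build, log in, and push:

docker-compose build
docker login
docker-compose push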

How do I build the images locally before deploying remotely via docker-compose so I don't have to send a big build context over the internet?

I have been deploying remotely with docker-compose -H $docker_host up --build, but the directory is large, and even though I only need 2 small binaries for the images, everything is getting sent to the remote server.
The slowness is having an impact on my performance, so I was wondering if there was a way to make compose build the images locally and then send those instead of the whole directory.
A .dockerignore file tells Docker which files to ignore when sending the build context during a docker build.
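For the case described in the question (only two small binaries are needed), a whitelist-style .dockerignore might look like this (the binary paths are hypothetical):

# ignore everything...
*
# ...except the two binaries the images actually need
!bin/binary1
!bin/binary2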
You can also build the images locally, but you will need a Docker daemon to do that. After building an image you would then push it to a Docker registry (you could also use a public registry, like Docker Hub) and then download the image from that registry on the target server.
You can head over to the docker docs to learn more about the architecture or ask a more specific question - I'll try to answer.
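A rough sketch of that registry-based flow, reusing the -H remote-host style from the question (the image name is hypothetical):

# locally: build and push
docker build -t myuser/app:latest .
docker push myuser/app:latest
# remotely: pull and start via compose
docker-compose -H $docker_host pull
docker-compose -H $docker_host up -d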

How to pull new docker images and restart docker containers after building docker images on gitlab?

There is an ASP.NET Core API project, with sources in GitLab.
I created a GitLab CI/CD pipeline to build a Docker image and put the image into the GitLab Docker registry
(thanks to https://medium.com/faun/building-a-docker-image-with-gitlab-ci-and-net-core-8f59681a86c4).
How do I update the Docker containers on my production system after putting the image into the GitLab Docker registry?
*by update I mean:
docker-compose down && docker pull && docker-compose up
The best way to do this is to use an image puller; lots of open-source ones are available, or you can write your own in shell. There is one here. We use Swarm, and we use this hook concept, triggered from our CI/CD pipeline: once our build stage is done, we hit the hook URL over HTTP, and Docker pulls the updated image. One disadvantage of this is that you need a daemon to watch your hook task so that it doesn't crash or go down. So my suggestion is to run this hook task as a Docker container with the restart policy set to always.
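As a minimal sketch of what such a pull-and-restart step could run on the production host (the registry address and the compose project location are assumptions):

#!/bin/sh
# log in to the GitLab registry, fetch the new images, and recreate containers
docker login registry.gitlab.com
cd /srv/my-app && docker-compose pull && docker-compose up -d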

How to run docker-compose with docker image?

I've moved my docker-compose container from the development machine to a server using docker save image-name > image-name.tar and cat image-name.tar | docker load. I can see that my image is loaded by running docker images. But when I want to start my server with docker-compose up, it says that there isn't any docker-compose.yml. And there really isn't any .yml file. So what should I do?
UPDATE
When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
What you achieve with docker save image-name > image-name.tar and cat image-name.tar | docker load is that you put a Docker image into an archive and extract the image on another machine afterwards. You can check whether this worked correctly with docker run --rm image-name.
An image is just like a blueprint you can use for running containers. This has nothing to do with your docker-compose.yml, which is just a configuration file that has to live somewhere on your machine. You would have to copy this file manually to the remote machine you wish to run your image on, e.g. using scp docker-compose.yml remote_machine:/home/your_user/docker-compose.yml. You could then run docker-compose up from /home/your_user.
EDIT: Additional info concerning the updated question:
UPDATE: When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save/load the image first?
Personally, I have never used this approach of transferring a Docker image (but it's cool, I didn't know about it). What you would typically do is push your image to a Docker registry (either the official Docker Hub, or a self-hosted registry) and then pull it from there.
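For example, with Docker Hub the round trip might look like this (the repository name is hypothetical):

# on the development machine
docker tag image-name your_user/image-name:latest
docker push your_user/image-name:latest
# on the server
docker pull your_user/image-name:latest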
