How to run docker-compose with a docker image?

I've moved my docker-compose container from the development machine to a server using docker save image-name > image-name.tar and cat image-name.tar | docker load. I can see that my image is loaded by running docker images. But when I want to start my server with docker-compose up, it says that there isn't any docker-compose.yml. And there really isn't any .yml file. So what should I do about this?
UPDATE
When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save and load the image first?

What you achieve with docker save image-name > image-name.tar and cat image-name.tar | docker load is that you put a Docker image into an archive and extract that image on another machine afterwards. You can check whether this worked correctly with docker run --rm image-name.
An image is just like a blueprint you can use for running containers. This has nothing to do with your docker-compose.yml, which is just a configuration file that has to live somewhere on your machine. You would have to copy this file manually to the remote machine you wish to run your image on, e.g. using scp docker-compose.yml remote_machine:/home/your_user/docker-compose.yml. You could then run docker-compose up from /home/your_user.
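Putting that together, a minimal sketch of the whole transfer, using the host name and paths from the example above (the -d flag is just added so the stack keeps running in the background):

# on the development machine:
docker save image-name > image-name.tar
scp image-name.tar docker-compose.yml remote_machine:/home/your_user/
# on the server:
cd /home/your_user
docker load < image-name.tar
docker-compose up -d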
EDIT: Additional info concerning the updated question:
UPDATE: When I copied all my project files to the server (including docker-compose.yml), everything started to work. But is this the normal approach, and why did I need to save and load the image first?
Personally, I have never used this approach of transferring a Docker image (but it's cool, I didn't know about it). What you would typically do is push your image to a Docker registry (either the official DockerHub one, or a self-hosted registry) and then pull it from there.
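For comparison, a rough sketch of that registry-based workflow; the account name and tag below are placeholders:

# on the development machine:
docker tag image-name your_dockerhub_user/image-name:1.0
docker login
docker push your_dockerhub_user/image-name:1.0
# on the server (with docker-compose.yml referencing the same tag):
docker pull your_dockerhub_user/image-name:1.0
docker-compose up -d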

Related

How to run a Docker Compose image pulled from GitHub Packages

I have pushed a Docker image to GitHub Packages and now I would like to pull it and use it.
To run the image locally, I used to go to the related folder and run it with the command docker-compose up.
However now, by pulling from GitHub Packages, I just get the Docker image without any folder, and I don't know how I can run it.
Inspecting the image, it has all the files from the original folder, but when I try to run the docker-compose up ghcr.io/giuliomat/bdt-project command, I get an error saying that there is no docker-compose.yml in the directory. If I just use the command docker run ghcr.io/giuliomat/bdt-project, it runs only one of the two services specified in the docker-compose.yml file. How can I run the Docker Compose image correctly? Thanks in advance!
Update: Let me try to explain myself better. The image contains a Dockerfile (which I have now added to the question) that is used to build the web service. I developed the image locally and I have no problem running it with docker-compose up, but now I want to see what has to be done so that a user who pulls it from my GitHub Packages can run it, and this is my problem. The pulled image should have all the elements needed to run, but I don't know what command to use to tell Docker to run both services specified in the docker-compose.yml file, since a user who pulls from GitHub Packages only gets the image and has no folder in which to run docker-compose up.
(The Dockerfile, the docker-compose.yml, and the contents of the pulled Docker image were attached to the question.)
A Docker image repository does not store yml files, so you either provide a README.md for the user in the image registry (with the yml copy-pasted there verbatim) and/or you provide a link to the version control repository where the rest of the files reside, so the user can clone it and use docker-compose up.
docker-compose up [options] [--scale SERVICE=NUM...] [SERVICE...] means "find the given SERVICE(s) (or all services, if none are specified) in the docker-compose.yml in the current working directory and run them".
So if you move out of the folder containing docker-compose.yml, it won't pick up the compose file and therefore won't work.
Also, to use the published image you need to specify the image property of the service instead of build, because build works with a local Dockerfile and attempts to build an image instead of pulling it from the GitHub Docker image registry:
web:
  image: "ghcr.io/giuliomat/bdt-project:latest"
It would be the same way you already have it for the redis service.
Also make sure you can pull the image locally first (otherwise docker login would be necessary before the compose commands):
docker pull ghcr.io/giuliomat/bdt-project
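As a rough sketch, a consumer of the published image could create a compose file like the following and start both services from it; the service names, the redis tag and the port mapping are assumptions, not taken from the actual project:

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: "ghcr.io/giuliomat/bdt-project:latest"
    ports:
      - "8080:8080"        # hypothetical port mapping
  redis:
    image: "redis:alpine"  # assumed tag
EOF
docker-compose up -d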

Some questions on Docker basics?

I'm new to Docker. Most of the tutorials on Docker cover the same thing. I'm afraid I'm just ending up with piles of questions and no real answers. I've come here after my fair share of Googling; kindly help me out with these basic questions.
When we install Docker, where does it get installed? Is it on our local computer or does it happen in the cloud?
Where do containers get pulled to? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
When we pull a Docker image or clone a repository from Git, where does this data get stored?
It looks like you are confused after reading too many documents. Let me try to put this in simple words. Hope this helps.
When we install Docker, where does it get installed? Is it on our local computer or does it happen in the cloud?
We install Docker on a VM, be it an on-prem VM or one in the cloud. You can install Docker on your laptop as well.
Where do containers get pulled to? Is there a way I can see what is inside a container? (I'm using Ubuntu 18.04)
This question comes down to terminology. We don't pull a container. We pull an image and run a container from it.
Quick terminology summary
Container -> A container lets you run an application whose code, configurations, and dependencies have been packaged into a template called an image.
Dockerfile -> Here you write your commands; it is the blueprint for your image.
Image -> An image is built from a Dockerfile. You use an image to create and run containers.
Yes, you can get a shell inside the container. Use the command below:
docker exec -it <container-id> /bin/bash
When we pull a Docker image or clone a repository from Git, where does this data get stored?
You can pull open-source images from Docker Hub.
When you clone a Git project that is dockerized, you can look for the Dockerfile in that project and build your own image from it:
docker build -t <yourimagename:tag> .
When you build or pull an image, it gets stored locally. Use the docker images command to list the images on your machine.
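Putting it together, a rough sketch of the typical flow after cloning a dockerized project (the repository URL and names are placeholders):

git clone https://github.com/some-user/some-project.git
cd some-project
docker build -t myimage:1.0 .      # build an image from the project's Dockerfile
docker images                      # the new image appears in your local image store
docker run -d --name myapp myimage:1.0
docker exec -it myapp /bin/bash    # look around inside the running container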
Refer to a Docker cheat sheet for more commands to play with.
The Docker daemon gets installed on your local machine, and everything you do with the Docker CLI gets executed on your local machine and its containers.
(Not sure about the first part of your question.) You can easily access your Docker containers with docker exec -it <container name> /bin/bash; for that, the container needs to be running. Check running containers with docker ps.
(Again, I do not entirely understand your question.) The images that you pull get stored on your local machine as well. You can see all the images present on your machine with docker images.
Let me know if this was helpful and if you need any further information.

Install Docker image from a local Dockerfile

I'm working from my local laptop and preparing a Dockerfile that I want to use for deployment later on the server. The problem is that the server contains only the Docker client/daemon, has no connectivity to the official Docker registry, and does not provide its own image registry.
Is it possible to build my image locally, ship it to the server and run a container on it without going through the trouble of creating my own image registry?
You can save an image using docker save imagename, which creates a tarfile, and then use docker load to create an image on the server from that tarfile.
Don't confuse this with docker export, which creates a tar from a container; see Difference between save and export in Docker. As shown in that link, an exported container might be smaller because it flattens the layers. If size matters, you might consider committing a container and exporting it right afterwards.
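A rough sketch of both options; image and container names are placeholders:

# option 1: ship the image with all its layers
docker save myimage:latest > myimage.tar
# option 2: export a container's flattened filesystem instead
docker create --name tmp myimage:latest
docker export tmp > myimage-flat.tar
# on the server:
docker load < myimage.tar                      # restores the image from option 1
docker import myimage-flat.tar myimage:flat    # turns the option 2 tar back into an image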

Docker - is it necessary to push images to remote server?

I have successfully built some Docker images.
Now I would like to start my microservices with docker-compose; unfortunately I am unable to pull those images, i.e. "repository callista/discovery-server not found: does not exist or no pull access". I solved this error by logging into my DockerHub account and pushing those images there. But it seems to me like a bit of an overkill to send such large images (which are likely to change pretty soon) over the Internet over and over again, twice each time (push & pull).
Is it possible to configure Docker to use those images locally and not to pull them from a remote server?
I use Docker 1.8 and work on Windows 10.
Do you need to run these images on a server different from the one where you build them?
If so, you have some alternatives:
As #engineer-dollery said, you can run a registry inside your network; then you would not need to send the images over the internet, only within your network (see the sketch at the end of this answer). Docs: https://docs.docker.com/registry/deploying/
You could use docker save and docker load to move them around too. Docs: https://docs.docker.com/engine/reference/commandline/save/
But if the server where you run the images is the same one where you build them...
...then you could just add the image property to your docker-compose services and do a docker-compose build, as #lauri said. With image set, docker-compose will tag the built image with that name, and you could then also run it with docker run. Or do docker-compose up --build so it always rebuilds if something changes in the Dockerfile.
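For the first alternative, a minimal sketch of a self-hosted registry inside your own network; the host name and port are examples, and a plain HTTP registry has to be whitelisted in the daemon's insecure-registries setting:

# on the build machine (or any machine in your network):
docker run -d -p 5000:5000 --name registry registry:2
docker tag callista/discovery-server localhost:5000/callista/discovery-server
docker push localhost:5000/callista/discovery-server
# on the machine that runs the containers:
docker pull registry-host:5000/callista/discovery-server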
If you define the build option in docker-compose.yml, you should be able to build the images locally with Docker Compose, and it will then use those images without pulling. By default Docker Compose builds images if they are not found locally. If you want to rebuild images, just add the --build option to the docker-compose up command: docker-compose up --build
Docker Compose build reference:
https://docs.docker.com/compose/compose-file/#build
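A minimal sketch of a service that both builds locally and names the resulting image; the build context path is a placeholder, the image name is taken from the question:

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  discovery-server:
    build: ./discovery-server   # placeholder path to the folder with the Dockerfile
    image: callista/discovery-server:latest
EOF
docker-compose build        # builds the image and tags it callista/discovery-server:latest
docker-compose up --build   # or build (if needed) and start in one step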

Dealing with data in Docker Containers with Gitlab-Ci

So I am using GitLab CI to deploy my websites in Docker containers. Because the GitLab CI Docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have two volumes in my Docker container: ./:/var/www/html/ (which is the content of my git repo, i.e. files I want to replace on each build) and a mount that is "inside" the first one, /srv/data:/var/www/html/software/permdata (which is a persistent mount on my server).
When the GitLab CI runner starts, it tries to remove all files while the container is running, but because of this mount-in-mount it gets a "device busy" error and aborts. So I have to manually stop and remove the container before I can run my build (which kind of defeats the point of build automation).
Options I thought about to fix this problem:
stop and remove the container before gitlab-ci-multi-runner starts (seems not to be possible)
add the git data to my Docker container and only mount permdata (but it seems you can't add data to a container without the volume option in Docker Compose like you can in a Dockerfile)
Option 2 would be ideal because then it would also sort out my issues with file permissions.
Maybe someone has run into the same problem and could give me some advice.
it seems you can't add data to a container without the volume option in Docker Compose like you can in a Dockerfile
That's correct. The Compose file is not meant to replace the Dockerfile; it's meant to run multiple images for an application or project.
You can modify the Dockerfile to copy the git files into the image.
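A rough sketch of that change; the base image is an assumption, the paths come from the question:

cat > Dockerfile <<'EOF'
# assumed base image for a PHP site served from /var/www/html
FROM php:7.4-apache
# bake the repository contents into the image instead of bind-mounting them
COPY . /var/www/html/
EOF
docker build -t mysite .
# only the persistent data still needs a volume:
docker run -d -v /srv/data:/var/www/html/software/permdata mysite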
