How to run the Docker image using docker-compose? [duplicate]

I have a Flask application running under Docker Compose with two containers: one for Flask and one for Nginx.
I am able to run Flask successfully using the docker-compose up --build -d command on my local machine.
What I want is to save the images into a .tar.gz file, move them to the production server, and run them automatically. I used the Bash script below to save the Flask and Nginx images into a single archive.
#!/bin/bash
# Collect every image name referenced by the Compose file
images=""
for img in $(docker-compose config | awk '{if ($1 == "image:") print $2;}'); do
    images="$images $img"
done
# Save all collected images into a single compressed archive
docker save $images | gzip -c > flask_image.tar.gz
I then moved this archive, flask_image.tar.gz, to my production server, where Docker is installed, and used the command below to load the images:
docker load -i flask_image.tar.gz
This loaded every layer and made the images available on my production server. But the containers are not up, which is expected, since I only used the load command.
My question is: is there any command that can load the images and bring the containers up automatically?
docker-compose.yml
version: '3'
services:
  api:
    container_name: flask
    image: flask_img
    restart: always
    build: ./app
    volumes:
      - ~/docker_data/api:/app/uploads
    ports:
      - "8000:5000"
    command: gunicorn -w 1 -b :5000 wsgi:app -t 900
  proxy:
    container_name: nginx
    image: proxy_img
    restart: always
    build: ./nginx
    volumes:
      - ~/docker_data/nginx:/var/log/nginx/
    ports:
      - "85:80"
    depends_on:
      - api

Since you mention you are already pushing the Docker image to Docker Hub, the image has a Docker Hub tag that you can also use to pull it.
Usually I use something like this to pull and run images that are on a registry:
docker run -d --restart=always -p 80:8080 my-dockerhub-user/my-image-name:my-tag
which runs the container in detached mode and restarts it if it fails. (Note that --rm cannot be combined with a restart policy.) That's just an example; you'd want the ports to align with whatever Flask is listening on (8080 in my example) and with what your server should be listening on (80 in my example).
The server will automatically pull the image down and run it. You can use tags to promote new images, but in that case you'll have to kill the old container as well.
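If you would rather keep the tarball workflow from the question instead of using a registry, a minimal sketch of a deploy script on the production server could combine the load and the start in one step. This assumes docker-compose and the project's docker-compose.yml are already present on the server; the archive name comes from the question:
#!/bin/bash
set -e
# Load the saved Flask and Nginx images from the archive
docker load -i flask_image.tar.gz
# Start the containers in detached mode; --no-build stops Compose from
# rebuilding (and overwriting) the images that were just loaded
docker-compose up -d --no-build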


Why does a container based on ubuntu not run from a docker-compose file, when it works for a similar nginx container? [duplicate]

I am trying to run a Docker container using a docker-compose file instead of a long command line.
I want to run a docker-compose file based on ubuntu:latest. The container is created but doesn't keep running.
version: "3.9"
services:
ubuntu:
image: ubuntu:latest
container_name: nginx_from_scratch3
ports:
- "80:80"
Before that I tried adding this line to my docker-compose file:
command: bash
and nothing changed. I thought that with it the container would continue to run, but that didn't happen.
But on the other side if I use nginx image all run perfectly.
version: "3.9"
services:
nginx1:
image: nginx
container_name: nginx_from_scratch4
ports:
- "80:80"
Why does the docker-compose file work for the nginx image but not for the ubuntu image?
A Docker container exits when the task inside it is done. So when you run nginx, the image starts the nginx process automatically and keeps it alive. The ubuntu image has no task to keep running, so the container exits immediately. If you want to keep it alive even though it has no job to do, add a command such as tail -F:
version: "3.9"
services:
ubuntu:
image: ubuntu
command: tail -F anything
After you run docker container ps you will see it running,
and you can get a shell inside it with:
docker exec -it container_name bash
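An alternative sketch that avoids a dummy command: allocating a TTY keeps the ubuntu image's default bash process alive, waiting for input, instead of letting it exit (the service name here just mirrors the example above):
version: "3.9"
services:
  ubuntu:
    image: ubuntu
    tty: true   # bash keeps running, waiting on the allocated terminal
With this in place, docker exec works the same way as above.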

How to add files to a docker container and make them accessible from other containers?

Short version:
I want to add files to a docker container via docker-compose or a Dockerfile, and I want to make them accessible from the other containers defined in my docker-compose file. How can I do that?
Long version:
I have a Python app in a container that uses a .csv file to generate a POJO machine learning model.
I also have a Java app in a container that uses the POJO machine learning model and appends to the .csv file. The Java app has a fileWatcher() method implemented.
The containers are built from the docker-compose file, which calls a Dockerfile for each of them. So I want to add the files this way and not with ad-hoc docker commands.
You can add the same named volume to different containers:
docker volume create --name volume_data
docker run -t -i -v volume_data:/public debian:jessie /bin/bash
docker run -t -i -v volume_data:/public2 debian:jessie /bin/bash
or in docker-compose.yml:
services:
  assets:
    image: any_asset_image
    volumes:
      - assets:/public/assets
  proxy:
    image: nginx
    volumes:
      - assets:/public/assets
volumes:
  assets:
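As a quick check that the named volume is really shared, you can write a file from one container and read it from the other (names and paths follow the docker run example above):
# Write a file into the named volume from one container
docker run --rm -v volume_data:/public debian:jessie sh -c 'echo hello > /public/test.txt'
# Read it back from a second container that mounts the same volume elsewhere
docker run --rm -v volume_data:/public2 debian:jessie cat /public2/test.txt   # prints "hello"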

Docker Compose does not allow using local images

The following command fails, trying to pull image from the Docker Hub:
$ docker-compose up -d
Pulling web-server (web-server:staging)...
ERROR: repository web-server not found: does not exist or no pull access
But I just want to use a local version of the image, which exists:
$ docker images
REPOSITORY    TAG       IMAGE ID       CREATED       SIZE
web-server    staging   b94573990687   7 hours ago   365MB
Why doesn't Docker search among the locally stored images?
This is my Docker Compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: web-server:staging
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
and my .env file:
DOCKER_HOST=tcp://***.***.**.**:2376
DOCKER_TLS_VERIFY=true
DOCKER_CERT_PATH=/Users/Victor/Documents/Development/projects/.../target/docker
In general, this should work as you describe it. I tried to reproduce it, but it simply worked...
Folder structure:
.
├── docker-compose.yml
└── Dockerfile
Content of Dockerfile:
FROM alpine
CMD ["echo", "i am groot"]
Build and tag image:
docker build -t groot .
docker tag groot:latest groot:staging
with docker-compose.yml:
version: '3.1'
services:
  groot:
    image: groot:staging
and start docker-compose:
$ docker-compose up
Creating groot_groot ...
Creating groot_groot_1 ... done
Attaching to groot_groot_1
groot_1 | i am groot
groot_groot_1 exited with code 0
Version >1.23 (2019 and newer)
The easiest way is to change image: to build: and reference the Dockerfile in the relative directory, as shown below:
version: '3.0'
services:
  custom_1:
    build:
      context: ./my_dir
      dockerfile: Dockerfile
This allows docker-compose to manage the entire build and image orchestration in a single command.
# Rebuild all images
docker-compose build
# Run system
docker-compose up
In your docker-compose.yml, you can specify build: . instead of build: <username>/<repo> for local builds (rather than pulling from Docker Hub). I can't verify this yet, but I believe you may be able to use paths relative to the docker-compose file for multiple services.
services:
  app:
    build: .
Reference: https://github.com/gvilarino/docker-workshop
March-09-2020 EDIT:
(docker version 18.09.9-ce, build 039a7df; docker-compose version 1.24.0, build 0aa59064)
I found that to just create a Docker container, you can run docker-compose up -d after tagging the image with a fake local registry server tag (localhost:5000/{image}):
$ docker tag {imagename}:{imagetag} localhost:5000/{imagename}:{imagetag}
You don't need to run a local registry server, but you do need to change the image URL in the docker-compose YAML file to the fake local registry URL:
version: '3'
services:
  web-server:
    image: localhost:5000/{your-image-name}   # changed from {imagename}:{imagetag} to localhost:5000/{imagename}:{imagetag}
    ports:
      - "80:80"
and then just run up -d:
$ docker-compose -f {yamlfile}.yaml up -d
This creates the container if you already have the image (localhost:5000/{imagename}) on your local machine.
Adding to @Tom Saleeba's response:
I still got errors after tagging the image with a "/"
(for example: victor-dombrovsky/docker-image:latest).
It kept looking for the image on the remote docker.io server:
registry_address/docker-image
It seems the part before the "/" is the registry address and the part after it is the image name, and without a "/" docker-compose by default looks for the image on the remote docker.io.
I guess it's a known bug with docker-compose.
I finally got it working by running a local registry, pushing the image to the local registry with the registry tag, and pulling the image from the local registry:
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
$ docker tag your-image-name:latest localhost:5000/your-image-name
$ docker push localhost:5000/your-image-name
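To confirm the push landed in the local registry, you can query the registry's standard HTTP API (the /v2/_catalog endpoint is part of the registry:2 image):
$ curl http://localhost:5000/v2/_catalog
# expected output: {"repositories":["your-image-name"]}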
and then change the image URL in the docker-compose file:
version: '3'
services:
  chat-server:
    image: chat-server:staging
    ports:
      - "8110:8110"
  web-server:
    image: localhost:5000/{your-image-name}   # changed here
    ports:
      - "80:80"
      - "443:443"
      - "8009:8009"
      - "8443:8443"
Similarly for the chat-server image.
You might need to change your image tag to have two parts separated by a slash /. So instead of
chat-server:staging
do something like:
victor-dombrovsky/chat-server:staging
I think there's some logic behind Docker tags: "one-part" names are interpreted as official images coming from Docker Hub.
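A brief illustration of that resolution logic, using hypothetical names (this is how Docker expands image references, not something you need to run):
# A single-segment name resolves to the official "library" namespace on Docker Hub
docker pull chat-server:staging            # -> docker.io/library/chat-server:staging
# A user/name form resolves to that user's Docker Hub namespace
docker pull someuser/chat-server:staging   # -> docker.io/someuser/chat-server:staging
# A host (and optional port) before the first slash targets that registry instead
docker pull localhost:5000/chat-server:staging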
For me, putting build: . did the trick. My working Docker Compose file looks like this:
version: '3.0'
services:
  terraform:
    build: .
    image: tf:staging
    env_file: .env
    working_dir: /opt
    volumes:
      - ~/.aws:/.aws
You have a DOCKER_HOST entry in your .env 👀
From the looks of your .env file you seem to have configured docker-compose to use a remote docker host:
DOCKER_HOST=tcp://***.***.**.**:2376
Moreover, this .env is only loaded by docker-compose, but not docker. So in this situation your docker images output doesn't represent what images are available when running docker-compose.
When running docker-compose you're actually running Docker on the remote host tcp://***.***.**.**:2376, yet when running docker by itself you're running Docker locally.
When you run docker images, you're indeed seeing a list of the images that are stored locally on your machine. But docker-compose up -d is going to attempt to start the containers not on your local machine, but on ***.***.**.**:2376. docker images won't show you what images are available on the remote Docker host unless you set the DOCKER_HOST environment variable, like this for example:
DOCKER_HOST=tcp://***.***.**.**:2376 docker images
Evidently the remote Docker host doesn't have the web-server:staging image stored there, nor is the image available on Docker hub. That's why Docker complains it can't find the image.
Solutions
Run the container locally
If your intention was to run the container locally, then simply remove the DOCKER_HOST=... line from your .env and try again.
Push the image to a repository.
However, if you plan on running the image remotely on the given DOCKER_HOST, then you should probably push it to a repository. You can create a free repository at Docker Hub, or you can host your own repository somewhere, and use docker push to push the image there, then make sure your docker-compose.yml references the correct repository.
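A minimal sketch of that flow, with my-dockerhub-user standing in for a hypothetical Docker Hub account:
# Tag the local image under your registry namespace, then push it
docker tag web-server:staging my-dockerhub-user/web-server:staging
docker push my-dockerhub-user/web-server:staging
Then point image: in docker-compose.yml at my-dockerhub-user/web-server:staging.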
Save the image, load it remotely.
If you don't want to push the image to Docker Hub or host your own repository you can also transfer the image using a combination of docker image save and docker image load:
docker image save web-server:staging | DOCKER_HOST=tcp://***.***.**.**:2376 docker image load
Note that this can take a while for big images, especially on a slow connection.
You can use pull_policy:
image: img:tag
pull_policy: if_not_present
My issue when hitting this was that I had built the image with docker without sudo and ran docker compose with sudo. Running docker images and sudo docker images gave me two different sets of images, and sudo docker compose up only had access to the latter.
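A quick way to see that split is sketched below; adding your user to the docker group is the usual fix, though note it grants root-equivalent access to the daemon:
# The rootless and rootful invocations can point at different image stores
docker images
sudo docker images
# Common fix: let your user talk to the Docker daemon without sudo
sudo usermod -aG docker $USER   # log out and back in for this to take effect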

Multiple docker images run from docker file

I am trying to run multiple Docker containers from a single docker file, each on a different port.
Please advise how to execute multiple "docker run" commands from a single docker file with different ports.
It sounds like you want to use docker-compose. Here is an example using nginx and redis (it's how I do it anyway):
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  redis:
    image: redis
    ports:
      - "1000:1000"
So as you can see, if I run docker-compose up, Docker will spin up two containers, nginx and redis, each running on a different port! If you don't want to use docker-compose, you can do it with docker run:
docker run -d --name nginx -p 80:80 nginx
docker run -d --name redis -p 1000:1000 redis
I don't 100% understand your question, but I hope this helps!

Strange way to launch a background apache/mysql docker container

I downloaded a Debian image for Docker and created a container from it.
I have successfully installed apache and mysql on this container (from /bin/bash).
I want to make this Docker container run in the background.
I have tried a lot of tutorials (I have created images with a Dockerfile) but nothing really works. Apache and mysql were run as root...
So I launched this command:
docker run -d -p 80:80 myimagefile /bin/bash -c "while true; do sleep 10; done"
Then I attached a /bin/bash with the exec command and started mysql and apache2 manually (/etc/init.d/ scripts). When I type CTRL-D, the bash is killed but the container stays in the background, with mysql and apache alive!
I am wondering if this method is correct or if it is something ugly. Is there a better way to do this?
I do not want to write a Dockerfile that describes how to install apache and mysql. I have made my own image, with my application and all prerequisites.
I just want to start a container from my image and start apache and mysql automatically.
I have a second question: with my method, the container is not reloaded if I reboot the physical computer. How can I start it automatically and keep its data persistent?
Thanks
I would suggest running mysql and apache in separate containers. Additionally, Docker Hub already has container images that you could re-use:
https://hub.docker.com/_/mysql/
The following is an example of a docker-compose file that describes how to launch Drupal:
version: '2'
services:
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=letmein
      - MYSQL_DATABASE=drupal
      - MYSQL_USER=drupal
      - MYSQL_PASSWORD=drupal
    volumes:
      - /var/lib/mysql
  web:
    image: drupal
    depends_on:
      - db
    ports:
      - "8080:80"
    volumes:
      - /var/www/html/sites
      - /var/www/private
Run as follows
$ docker-compose up -d
Creating dockercompose_db_1
Creating dockercompose_web_1
This exposes Drupal on port 8080:
$ docker-compose ps
        Name                   Command              State         Ports
--------------------------------------------------------------------------------
dockercompose_db_1    docker-entrypoint.sh mysqld   Up      3306/tcp
dockercompose_web_1   apache2-foreground            Up      0.0.0.0:8080->80/tcp
Note:
When running the drupal installer, configure it to connect to a host called "db", which is the mysql container.
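On the second question (surviving a reboot with persistent data), a hedged sketch of the usual approach is a restart policy plus a named volume; the volume name db_data below is illustrative:
version: '2'
services:
  db:
    image: mysql
    restart: always              # the Docker daemon restarts the container after a reboot
    volumes:
      - db_data:/var/lib/mysql   # named volume survives container re-creation
volumes:
  db_data: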
