I want to run an application using docker-compose on a Linux server that already has the images stored locally.
The application consists of two services. Running docker images on the server indicates that the images do in fact exist:
REPOSITORY             TAG       IMAGE ID       CREATED             SIZE
app_nginx              latest    b8362b71f3da   About an hour ago   107MB
app_dash_alert_app     latest    432f03c01dc6   About an hour ago   1.67GB
Here is my docker-compose.yml:
version: '3'
services:
  dash_alert_app:
    container_name: dash_alert_app
    restart: always
    build: ./dash_alert_app
    ports:
      - "8000:8000"
    command: gunicorn -w 1 -b :8000 dash_histogram_daily_counts:server
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - dash_alert_app
When I run docker-compose pull, it seems to be able to see the images and pulls them in:
$ sudo docker-compose pull
Pulling dash_alert_app ... done
Pulling nginx ... done
But when I try to spin up the containers, I get the following error suggesting that the images still need to be built:
$ docker-compose up -d --no-build
ERROR: Service 'dash_alert_app' needs to be built, but --no-build was passed.
Note that I've configured Docker to store images in /mnt/data/docker - here is my /etc/docker/daemon.json file:
{
    "graph": "/mnt/data/docker",
    "storage-driver": "overlay",
    "bip": "192.168.0.1/24"
}
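If you want to verify that the daemon actually picked up that location and driver, docker info reports both (note that newer Docker releases renamed graph to data-root):

# Both values should reflect /etc/docker/daemon.json after a daemon restart
docker info --format 'Root Dir: {{.DockerRootDir}}'
docker info --format 'Storage Driver: {{.Driver}}'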
Here is my folder structure:
.
│ docker-compose.yml
└───dash_alert_app
└───nginx
Why is docker-compose not using the images that exist locally?
It looks like you forgot to specify the image key. Also, do you really have to build the images again with docker-compose build, or are the existing ones sufficient? If they are, please try this:
version: '3'
services:
  dash_alert_app:
    image: app_dash_alert_app
    container_name: dash_alert_app
    restart: always
    ports:
      - "8000:8000"
    command: gunicorn -w 1 -b :8000 dash_histogram_daily_counts:server
  nginx:
    image: app_nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
    depends_on:
      - dash_alert_app
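With the image keys pointing at the local tags (they must match the REPOSITORY column of the docker images output, e.g. app_dash_alert_app:latest), a quick sanity check could look like this:

# Render the resolved configuration to confirm compose will use the local images
sudo docker-compose config

# Start the stack strictly from the existing images, never building
sudo docker-compose up -d --no-build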
Related
I have a docker-compose.yml file like so:
version: "3.8"
services:
app:
build:
context: .
dockerfile: Dockerfile
image: darajava/audio-diary
ports:
- 80:3001
volumes:
- .:/app
- "/app/node_modules"
depends_on:
- db
container_name: "soliloquy_express"
db:
image: mariadb:latest
restart: always
environment:
- MYSQL_DATABASE=soliloquy
- MYSQL_USER=soliloquy
- MYSQL_PASSWORD=password
- MYSQL_ROOT_PASSWORD=password
volumes:
- ../db_data:/var/lib/mysql
container_name: "soliloquy_db"
I'm planning to add an nginx service here too.
I use
docker-compose build
and
docker-compose push
to push to Docker Hub, which I can then pull from my EC2 instance using:
docker pull darajava/audio-diary:latest
However, when I run that image, it only runs the app service (I think).
Using
docker-compose pull darajava/audio-diary:latest
does not work and leads to an error about a missing docker-compose.yml file.
So I have 2 questions:
Is there a way I can pull a whole docker-compose config, with app, db, and other services and pull and run it on my EC2 instance just by pulling from Docker Hub? or do I have the wrong use case for Docker Compose?
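For what it's worth, docker-compose pull takes service names from a docker-compose.yml in the working directory rather than an image reference, which explains the missing-file error; a sketch of the intended flow on the EC2 instance:

# With the project's docker-compose.yml copied into the current directory
# (compose reads services from it; the file itself is never pulled from a registry)
docker-compose pull     # pulls darajava/audio-diary and mariadb:latest
docker-compose up -d    # starts the app and db services together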
I have an app with a separated frontend and backend, each in its own subfolder. I have dockerized the front and the back separately in their respective folders.
Now I'm trying to run them on the same network by using docker-compose in the root folder. The build completes successfully, but when I run it, the front container works just fine while the back container exits with code 0.
It's probably worth mentioning that the backend is itself built with a docker-compose file of its own.
Can you help me, please?
Here's what the docker-compose.yml in the root folder looks like:
version: '3.7'
services:
  back:
    build: ./backend/
    ports:
      - "8000:8000"
  front:
    build: ./frontend/
    ports:
      - "80:3000"
Output:
app_back_1 exited with code 0
front_1 | INFO: Accepting connections at http://localhost:3000.
Here's the docker-compose file of the backend:
version: '3.5'
services:
  app:
    build:
      context: .
    command: gunicorn backend.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_data:/vol/static
    ports:
      - "8000:8000"
    restart: always
    env_file:
      - .env
    depends_on:
      - app-db
  app-db:
    image: postgres:12-alpine
    ports:
      - "5432:5432"
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data:rw
    env_file:
      - .env
  proxy:
    build: ./proxy
    volumes:
      - static_data:/vol/static
      - media_data:/vol/media
    restart: always
    ports:
      - "8008:80"
    depends_on:
      - app
volumes:
  static_data:
  media_data:
  postgres_data:
If the container runs well on its own, it should run equally well from the identical Docker image built by compose. Try docker-compose up --build --force-recreate --no-deps to recreate everything from scratch without the cache; then, if there is an error in your source code, it will show up both for the standalone container and under compose.
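If the back container still exits with code 0 after that, its logs are the next place to look; a small sketch using the service name from the root compose file above:

# Rebuild and recreate only the back service, bypassing the image cache
docker-compose build --no-cache back
docker-compose up -d --force-recreate --no-deps back

# Follow the service's output to see why it exits
docker-compose logs -f back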
I have 2 docker-compose files that each build an image from a Dockerfile, and I want to join those docker-compose files.
So, I created another docker-compose file that brings up these 2 images:
version: "3.4"
services:
frontend:
image: frontend-image
depends_on:
- backend
ports:
- "3000:80"
networks:
- teste-network
backend:
image: backend-image
ports:
- "5001:80"
networks:
- test-network
networks:
test-network:
driver: bridge
But this docker-compose file does not build the images, so I created a bash command that builds them:
bash -c "docker-compose -f ./frontend/docker/docker-compose.yml build
&& docker-compose -f ./backend/docker/docker-compose.yml build"
I want to run this script before bringing the containers up, just by typing docker-compose up.
I assume that you have 2 Dockerfiles - one for the frontend and the other for the backend - each of which resides in the corresponding folder from your post, that is:
frontend/docker/Dockerfile
backend/docker/Dockerfile
Then you can leverage docker-compose to build and run your images. All you have to do is tell docker-compose where the Dockerfiles are, which you can do by utilizing the build configuration:
version: "3.4"
services:
frontend:
image: frontend-image
build: ./frontend/docker
depends_on:
- backend
ports:
- "3000:80"
networks:
- test-network
backend:
image: backend-image
build: ./backend/docker
ports:
- "5001:80"
networks:
- test-network
networks:
test-network:
driver: bridge
Then running docker-compose up frontend will build the Docker images (if they do not exist) and then start them.
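If you would rather force a rebuild of both images on every start instead of relying on the cached ones, the --build flag does it in one step:

# Build (or rebuild) frontend-image and backend-image, then start both services
docker-compose up --build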
I have a Docker image on a GitLab registry.
When I run (after logging in on a target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear are working. When I enter the container, everything looks fine.
But I don't have any services running. So I had the idea to create a YAML file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then fails, exiting with code 0 and no further message.
If I add commands to my YAML like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command were executed outside the container, exiting with code 1. (artisan is a helper and is executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem is that I left a volume directive over from before, which overwrites my entire application with an empty directory. You can just leave it out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
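To double-check that the code baked into the image is really present once the bind mount is gone, a quick look inside the running container helps (my-app is the container_name from the compose file above):

# The application code from the image should now be visible, not an empty directory
docker exec -it my-app ls -la /application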
You can debug the containers' networking by listing the networks with docker network ls;
then, when the list is shown, inspect the compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
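A minimal sketch of that inspection flow; the network name below is hypothetical, since compose normally derives it from the project directory as <project>_default:

docker network ls                                   # find the network compose created
docker network inspect myproject_default            # hypothetical name; lists attached containers
docker-compose -f docker-compose-gitlab.yml down    # tear the stack down
docker-compose -f docker-compose-gitlab.yml up -d   # recreate it so both services join one network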
I was using docker-compose, but when I tried to build it again, this error showed up (I have built this docker-compose setup multiple times before):
ERROR: Service 'api' failed to build: max depth exceeded
I tried to execute docker system prune to clean my containers, but it didn't work.
docker-compose.yml
version: "3"
services:
client:
container_name: my_client
image: mhart/alpine-node:12
build: ./client
restart: always
ports:
- "3000:3000"
working_dir: /client
volumes:
- ./client:/client
entrypoint: ["npm", "start"]
links:
- api
networks:
- my_network
api:
container_name: my_api
build: ./api
restart: always
ports:
- "9000:9000"
environment:
DB_HOSTNAME: mysql
working_dir: /api
volumes:
- ./api:/api
depends_on:
- mysql
networks:
- my_network
mysql:
container_name: my_mysql
build: ./db
restart: always
volumes:
- /var/lib/mysql
- ./db:/db
ports:
- "3307:3306"
environment:
- MYSQL_ROOT_PASSWORD=n
- MYSQL_USER=n
- MYSQL_PASSWORD=n
- MYSQL_DATABASE=n
networks:
- my_network
command: '--default-authentication-plugin=mysql_native_password'
networks:
my_network:
driver: bridge
This is the Dockerfile for the api service:
FROM mhart/alpine-node:12
WORKDIR /api
COPY package*.json /api/
RUN npm i -g nodemon
RUN npm install
COPY . /api/
EXPOSE 9000
CMD ["npm", "run", "dev"]
Any help is appreciated.
So, I figured out that I just needed to execute docker system prune -a to remove any stopped containers. Now --build is working again.
This command deleted all my local Docker images related to my Dockerfile. After building so many times, my local storage had reached a limit, hence the error max depth exceeded.
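For reference, since this is destructive, here is roughly what the two variants remove:

docker system prune      # stopped containers, unused networks, dangling images, build cache
docker system prune -a   # additionally, ALL images not used by an existing container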
Max depth doesn't indicate an out-of-storage-capacity error (though a prune could accidentally fix it).
Rather, it indicates that the api image that you were building had too many layers.
A plausible theory is that you have a recursion caused by having this in your compose file:
image: mhart/alpine-node:12
build: ./client
and this in a Dockerfile
FROM mhart/alpine-node:12
(I'm assuming the Dockerfile in ./client is also FROM the same image).
Your build is essentially adding a few layers onto your local mhart/alpine-node:12 image every time you run it (you can confirm by running docker history mhart/alpine-node:12).
If so, you should probably rename the image in your compose file.
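For instance, pointing the client service at a differently named image (my-client here would be a hypothetical tag, distinct from the FROM base) breaks the cycle; the layer creep itself can be confirmed first:

# A long and growing history on the base image confirms the rebuild recursion
docker history mhart/alpine-node:12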