I have a docker-compose.yml file like so:
version: "3.8"
services:
app:
build:
context: .
dockerfile: Dockerfile
image: darajava/audio-diary
ports:
- 80:3001
volumes:
- .:/app
- "/app/node_modules"
depends_on:
- db
container_name: "soliloquy_express"
db:
image: mariadb:latest
restart: always
environment:
- MYSQL_DATABASE=soliloquy
- MYSQL_USER=soliloquy
- MYSQL_PASSWORD=password
- MYSQL_ROOT_PASSWORD=password
volumes:
- ../db_data:/var/lib/mysql
container_name: "soliloquy_db"
I'm planning to add an nginx service here too.
I use
docker-compose build
and
docker-compose push
to push to Docker Hub, which I can then pull from my EC2 instance using:
docker pull darajava/audio-diary:latest
However, when I run that image, it only runs the app service (I think).
Using
docker-compose pull darajava/audio-diary:latest
does not work and leads to an error regarding a missing docker-compose.yml file.
So I have 2 questions:
Is there a way I can pull a whole docker-compose config, with app, db, and other services, and run it on my EC2 instance just by pulling from Docker Hub? Or do I have the wrong use case for Docker Compose?
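In other words, what I'm hoping to be able to do on the EC2 instance is roughly this (just a sketch of the workflow I have in mind, assuming the docker-compose.yml is also present on the instance):

docker-compose pull    # pull the app image (and mariadb) instead of a docker pull per image
docker-compose up -d   # start app and db together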
Related
As far as I understand, only images can be uploaded to Docker Hub; they then need to be pulled, and can be launched via docker run. But what if I have several images that I run through Docker Compose? I have a site built on Next.js and nginx, with the following docker-compose.yml:
version: '3'
services:
  nextjs:
    build: ./
    networks:
      - site_network
  nginx:
    build: ./.docker/nginx
    ports:
      - 80:80
      - 443:443
    networks:
      - site_network
    volumes:
      - /etc/ssl/certs:/etc/ssl/certs:ro
    depends_on:
      - nextjs
networks:
  site_network:
    driver: bridge
If I do a git clone of the repository on the server and run docker-compose up --build -d, everything works. I want to automate all of this via GitLab CI/CD. I found an article that describes how to install a runner on the server, plus a .gitlab-ci.yml that builds an image, uploads it to Docker Hub, and then pulls it on the server and launches it with docker run. Based on that, the approach I see is this: in .gitlab-ci.yml I build several images and push them to the registry. Next, I copy a docker-compose.yml from the repository to the server, which will have the following structure:
version: '3'
services:
  nextjs:
    image: registry.gitlab.com/path_to_project/next:latest
    networks:
      - site_network
  nginx:
    image: registry.gitlab.com/path_to_project/nginx:latest
    ports:
      - 80:80
      - 443:443
    networks:
      - site_network
    volumes:
      - /etc/ssl/certs:/etc/ssl/certs:ro
    depends_on:
      - nextjs
networks:
  site_network:
    driver: bridge
How sound is this approach? Is there a more reliable or better way? I'm not considering an advanced stack (Kubernetes, etc.) yet; I want to learn the basics first before moving on.
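Roughly, the .gitlab-ci.yml I have in mind for this would look like the sketch below (job names, the /srv/site path, and the scp/ssh deploy step are placeholders for whatever the runner setup actually uses; the registry login uses GitLab's predefined CI variables):

stages:
  - build
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t registry.gitlab.com/path_to_project/next:latest .
    - docker build -t registry.gitlab.com/path_to_project/nginx:latest ./.docker/nginx
    - docker push registry.gitlab.com/path_to_project/next:latest
    - docker push registry.gitlab.com/path_to_project/nginx:latest

deploy:
  stage: deploy
  script:
    - scp docker-compose.yml deploy@my-server:/srv/site/
    - ssh deploy@my-server "cd /srv/site && docker-compose pull && docker-compose up -d"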
Hello, I have multiple projects, each with its own Dockerfiles and docker-compose.yml files. I am not too familiar with how I would set up the networking between these projects so they can share the same databases and talk to one another. Does anyone have suggestions?
Right now, in one of the projects, I am just pulling all the Dockerfiles into a single docker-compose.yml and setting up all the services I need from the other projects in that file. I do not think this is ideal, and there is a high level of coupling between the services.
version: "3"
services:
db:
image: mysql/mysql-server
ports:
- 3306:3306
mongo:
image: mongo
restart: always
rails_app:
build:
context: ${RAILS_APP_PATH}
dockerfile: Dockerfile
volumes:
- ${RAILS_APP_PATH}:/application
ports:
- 4000:4000
depends_on:
- db
- mongo
links:
- db
- mongo
frontend:
build:
context: ${FRONTEND_PATH}
ports:
- ${EXPOSED_PORT}:${EXPOSED_PORT}
depends_on:
- go_services
links:
- go_services
go_services:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
depends_on:
- db
- mongo
- rails_app
links:
- db
- mongo
- rails_app
The trick is to use an External Docker Network.
Set up the network and the Containers can talk to each other by their Service Names.
Set up the network on the host:
docker network create my-net
First compose file
version: '3.9'
services:
  mymongo:
    image: mongo:latest
    restart: unless-stopped
    container_name: mongo
    environment:
      MONGO_INITDB_DATABASE: mymongo
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - ./database:/data/db
    ports:
      - "27017:27017"
networks:
  default:
    external: true
    name: my-net
Second compose file
version: '3.9'
services:
  ui:
    build:
      context: ./build
      dockerfile: Dockerfile_ui
    image: ui
    restart: "no"
    container_name: ui
    ports:
      - "8005:3000"
    command: ["npm", "start"]
networks:
  default:
    external: true
    name: my-net
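With both stacks attached to the same external network, they can be brought up independently, and the ui container can reach the database by its container name. A sketch (the directory names are placeholders, and the Mongo URI just reuses the credentials from the first file):

docker network create my-net                        # once, on the host
docker-compose -f mongo/docker-compose.yml up -d    # first stack
docker-compose -f ui/docker-compose.yml up -d       # second stack
# from inside the ui container, MongoDB is reachable at:
#   mongodb://root:password@mongo:27017/mymongo?authSource=admin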
You can do this without any special Compose setup, if:
- each project is self-contained (they do not share databases)
- the service locations are configurable via environment variables
- you don't mind communicating via the host
If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.
Go ahead and break up your Compose file into several independent ones:
# rails/docker-compose.yml
version: '3.8'
services:
  db:
    image: mysql/mysql-server
  app:
    build: .
    ports: ['4000:4000']
    depends_on: [db]

# go/docker-compose.yml
services:
  mongo:
    image: mongo
  service:
    build: .
    ports: ['8080:8080']
    depends_on: [mongo]
    environment:
      - RAILS_APP_URL
You can start the Rails application independently:
docker-compose -f ./rails/docker-compose.yml up -d
You need to find a hostname through which the container can call back to the host. On macOS and Windows hosts, Docker provides the special hostname host.docker.internal for this. You can then connect the client container to the published port of its server:
export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
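On a Linux host, host.docker.internal is not defined by default; one option (assuming Docker Engine 20.10 or newer) is to map it to the host gateway yourself in the client's Compose file:

# go/docker-compose.yml (addition; only needed on Linux hosts)
services:
  service:
    extra_hosts:
      - "host.docker.internal:host-gateway"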
If you're doing development, you can run the service you're working on locally, with its dependencies in containers, and point the environment variable at the container's published port:
go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server
If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.
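In the multi-host case, that just means pointing the variable at the other machine (the hostname below is a placeholder):

export RAILS_APP_URL=http://rails-host.internal.example.com:4000
docker-compose -f ./go/docker-compose.yml up -d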
I have a Docker image in a GitLab registry.
When I run (after logging in on the target machine)
docker run -d -p 8081:8080/tcp gitlab.somedomain.com:5050/root/app
the Laravel app is available, running, and reachable. Things like php artisan config:clear work, and when I enter the container everything looks fine.
But I don't have any other services running. So I had the idea to create a YAML file for docker-compose to set things up, in docker-compose-gitlab.yml:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    volumes:
      - .:/application
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
Calling docker-compose --verbose -f docker-compose-gitlab.yml up shows me that the mysql service is created and working; the app also seems to be created, but then it fails, exiting with code 0 and no further message.
If I add commands to my YAML, like php artisan config:clear, the error gets even less clear to me: it says it cannot find artisan, and it seems as if the command is executed outside the container, exiting with code 1. (artisan is a helper executed via php.)
When I call docker-compose with -d and then run docker ps, I can only see mysql running, but not the app.
When I use both strategies, the problem is that the two containers do not share a common network and so cannot work together.
What did I miss? Is this the wrong strategy?
The problem was that I had left a volume directive in place which overwrote my entire application with an empty directory.
You can just leave that out:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: my-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=***
      - MYSQL_DATABASE=dbname
      - MYSQL_USER=username
      - MYSQL_PASSWORD=***
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - "3307:3306"
  application:
    image: gitlab.somedomain.com:5050/root/app:latest
    build:
      context: .
      dockerfile: ./Dockerfile
    container_name: my-app
    ports:
      - "8081:8080"
    ## volumes:
    ##   - .:/application  ## this would overwrite the app
    env_file: .env.docker
    working_dir: /application
    depends_on:
      - mysql
    links:
      - mysql
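After removing the volume lines, recreate the stack so the container again uses the code baked into the image (the file name is the one from the question):

docker-compose -f docker-compose-gitlab.yml down
docker-compose -f docker-compose-gitlab.yml up -d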
You can debug the containers' networking by listing the networks with docker network ls,
then, once the list is shown, inspecting the Compose network with docker inspect <ComposeNetworkID>.
Once you are sure that your services are not in the same network, remove your containers and recreate them with docker-compose -f docker-compose-gitlab.yml up.
If you notice they are in the same network, try using the container name instead of localhost to reach each other.
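Put together, the debugging sequence looks roughly like this (the network name Compose creates is usually <project>_default unless configured otherwise):

docker network ls                                   # list all networks
docker inspect <ComposeNetworkID>                   # see which containers are attached
docker-compose -f docker-compose-gitlab.yml down    # remove the containers
docker-compose -f docker-compose-gitlab.yml up      # recreate them on a shared network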
I want to run an application using docker-compose on a Linux server that already has the images stored locally.
The application consists of two services. Running docker images on the server indicates that the images do in fact exist:
REPOSITORY           TAG      IMAGE ID       CREATED             SIZE
app_nginx            latest   b8362b71f3da   About an hour ago   107MB
app_dash_alert_app   latest   432f03c01dc6   About an hour ago   1.67GB
Here is my docker-compose.yml:
version: '3'
services:
  dash_alert_app:
    container_name: dash_alert_app
    restart: always
    build: ./dash_alert_app
    ports:
      - "8000:8000"
    command: gunicorn -w 1 -b :8000 dash_histogram_daily_counts:server
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - dash_alert_app
When I run docker-compose pull, it seems to be able to see the images and pulls them in:
$ sudo docker-compose pull
Pulling dash_alert_app ... done
Pulling nginx ... done
But when I try to spin up the containers, I get the following, suggesting that the images still need to be built:
$ docker-compose up -d --no-build
ERROR: Service 'dash_alert_app' needs to be built, but --no-build was passed.
Note that I've configured Docker to store images in /mnt/data/docker; here is my /etc/docker/daemon.json file:
{
  "graph": "/mnt/data/docker",
  "storage-driver": "overlay",
  "bip": "192.168.0.1/24"
}
Here is my folder structure:
.
│   docker-compose.yml
└───dash_alert_app
└───nginx
Why is docker-compose not using the images that exist locally?
Looks like you forgot to specify the image key. Also, do you really have to build the images again with docker-compose build, or are the existing ones sufficient? If they are, please try this:
version: '3'
services:
  dash_alert_app:
    image: app_dash_alert_app
    container_name: dash_alert_app
    restart: always
    ports:
      - "8000:8000"
    command: gunicorn -w 1 -b :8000 dash_histogram_daily_counts:server
  nginx:
    image: app_nginx
    container_name: nginx
    restart: always
    ports:
      - "80:80"
    depends_on:
      - dash_alert_app
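With the image keys pointing at the locally built tags, the stack should come up without a rebuild (assuming the local images are tagged app_dash_alert_app:latest and app_nginx:latest, as in the docker images output above):

docker-compose up -d --no-build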
I have this docker-compose.yml:
version: '2'
services:
  db:
    # standard postgres
  php:
    build:
      context: ./
      dockerfile: ./docker/php-fpm/Dockerfile.deploy
    image: registry.gitlab.com/xxx/php:$CI_COMMIT_SHA
    volumes:
      - symfonydata:/var/www/symfony
  nginx:
    build: ./docker/nginx
    image: registry.gitlab.com/xxx/nginx:$CI_COMMIT_SHA
    ports:
      - "80:80"
    volumes_from:
      - php
    links:
      - php
volumes:
  symfonydata:
In the php Dockerfile, I add my PHP code to the volume like this:
FROM php:7.1-fpm
COPY . /var/www/symfony/
WORKDIR /var/www/symfony
OK, this works. After docker-compose build I can run
- docker push registry.gitlab.com/xxx/php:$CI_COMMIT_SHA
- docker push registry.gitlab.com/xxx/db:$CI_COMMIT_SHA
- docker push registry.gitlab.com/xxx/nginx:$CI_COMMIT_SHA
and push them to the registry. When I pull and run docker-compose up, the code is there and can be run.
Now, there are other initialization-related things to be done (but only after all services are running).
So I have to execute
docker-compose up
docker exec xxx_php_1 composer install
After that I would like to store my changes in the image. However, the changes are applied to the volume (the libraries are installed in vendor). The command docker tag ... does not affect the volume, and I also have no idea how to push the volume to the registry.