For development purposes I'm using Docker Compose inside a VirtualBox VM running on OSX. To start the magic I run something like this (dkc=docker-compose):
$ dkc -f base.yml -f development.yml up -d --build
My docker compose files would be something like this:
base.yml
version: '2'
services:
  padeltotal-app:
    build:
      context: ./padeltotal
      dockerfile: Dockerfile.app
    container_name: 'padeltotal-app'
    links:
      - padeltotal-mysql:db
    ports:
      - "9000:9000"
  padeltotal-mysql:
    build:
      context: ./padeltotal
      dockerfile: Dockerfile.db
    container_name: 'padeltotal-mysql'
    ports:
      - "3306:3306"
  nginx-lt:
    extends:
      file: common.yml
      service: nginx
    volumes_from:
      - padeltotal-app
development.yml
version: '2'
services:
  padeltotal-app:
    volumes:
      - ./padeltotal/code/src:/var/www/padeltotal/src
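A quick way to see what these two files merge into (docker-compose config prints the effective configuration, so you can confirm the development override is actually applied):
$ dkc -f base.yml -f development.yml config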
It's a PHP+MySQL application.
While I'm developing I'd like the mounted volume with the PHP code to be kept updated, reflecting the changes from my ./padeltotal folder.
The default behaviour of a mounted volume should be to reflect the HOST folder. However, on OSX it is not working that way.
I repeat, it's for development purposes; for production I don't even use Docker Compose.
Is there a way to re-mount the volume?
What would be a different approach?
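One thing worth checking in this kind of setup (an assumption about the environment: a docker-machine/boot2docker VM named default, which by default only shares /Users from the Mac): a bind mount from a path outside /Users will show up empty or stale inside containers. You can confirm the folder is actually visible inside the VM with something like:
$ docker-machine ssh default ls /Users/<your-user>/path/to/padeltotal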
I have the following two docker compose files:
docker-compose.yml:
version: '2.3'
services:
  # test11 service
  test11:
    build: test11/.
    image: "test11"
and
docker-compose.yml (inside the folder named test11, which contains the Dockerfile and the following compose file):
version: '2.3'
networks:
  citrixhoneypot_local:
services:
  # CitrixHoneypot service
  test11:
    build: .
    container_name: test11
    restart: always
    networks:
      - citrixhoneypot_local
    ports:
      - "443:443"
    image: "test11:2006"
    # read_only: true
    volumes:
      - test11:/opt/test11/logs
volumes:
  test11:
    driver: local
When I run docker-compose up --build for the first file, everything seems OK: the container builds and I can run exec -it sh on it and get access.
But the problem is that the volume isn't created under the path /var/lib/docker/volumes, and I can't find it there.
Also, when I write to /opt/test11/logs from inside the container, no file shows up under /var/lib/docker/volumes.
I tried this with a bind path too, and hit the same problem.
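A generic first debugging step here (not from the question itself) is to ask Docker where the volume actually lives. Compose prefixes volume names with the project name, so the exact name below is an assumption based on a project folder called test11:
$ docker volume ls
$ docker volume inspect test11_test11 --format '{{ .Mountpoint }}'
Note that on Docker Desktop for Mac/Windows the reported mountpoint is a path inside Docker's own VM, not on the host filesystem, which is the usual reason /var/lib/docker/volumes looks empty from the host.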
I am using docker-compose and here is my docker-compose.yaml file:
version: "3.7"
services:
node:
container_name: my-app
image: my-app
build:
context: ./my-app-directoty
dockerfile: Dockerfile
command: npm run dev
environment:
MONGO_URL: my-database
port: 3000
volumes:
- ./my-app-directory/src:/app/src
- ./my-app-directory/node_modules:/app/node_modules
ports:
- "3000:3000"
networks:
- my-app-network
depends_on:
- my-database
my-database:
container_name: my-database
image: mongo
ports:
- "27017:27017"
networks:
- my-app-network
networks:
my-app-network:
driver: bridge
I expect to find a clean, newly created database each time I run the following commands:
docker-compose build
docker-compose up
But this is not the case. When I bring the containers up with docker-compose up, my database is in the exact state it was in the last time I shut it down with the docker-compose down command. And since I have not specified a volumes property on the my-database service, is this normal behaviour? Does this mean that no other action is required to persist database state? And can I use this in production if I ever choose to use docker-compose?
The mongo image defines the following volumes:
/data/configdb
/data/db
So docker-compose will create and use an unnamed (anonymous) volume for /data/db.
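You can confirm these declared volumes from the image metadata, for example:
$ docker image inspect mongo --format '{{ json .Config.Volumes }}'
# expect /data/configdb and /data/db in the output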
If you want to have a new one, use:
docker-compose down -v
docker-compose up -d --build
Or use a bind mount at the volume location, like:
volumes:
  - ./db:/data/db:rw
And drop your local db directories when you want to start over.
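With the bind-mount variant, starting over would look something like:
$ docker-compose down
$ rm -rf ./db
$ docker-compose up -d --build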
Hello, I have multiple projects, each with its own Dockerfile and docker-compose.yml file. I am not too familiar with how to set up the networking between these projects so that they can share the same databases and talk to one another. Does anyone have suggestions?
Right now, in one of the projects, I am just pulling all the Dockerfiles into a single docker-compose.yml and setting up all the services I need from all the other projects in this one yml file. I do not think this is ideal, and there is a high level of coupling between the services.
version: "3"
services:
db:
image: mysql/mysql-server
ports:
- 3306:3306
mongo:
image: mongo
restart: always
rails_app:
build:
context: ${RAILS_APP_PATH}
dockerfile: Dockerfile
volumes:
- ${RAILS_APP_PATH}:/application
ports:
- 4000:4000
depends_on:
- db
- mongo
links:
- db
- mongo
frontend:
build:
context: ${FRONTEND_PATH}
ports:
- ${EXPOSED_PORT}:${EXPOSED_PORT}
depends_on:
- go_services
links:
- go_services
go_services:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
depends_on:
- db
- mongo
- rails_app
links:
- db
- mongo
- rails_app
The trick is to use an External Docker Network.
Set up the network and the Containers can talk to each other by their Service Names.
Set up the network on the host:
docker network create my-net
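You can verify the network was created (and, later, see which containers have joined it) with:
$ docker network inspect my-net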
First compose file
version: '3.9'
services:
  mymongo:
    image: mongo:latest
    restart: unless-stopped
    container_name: mongo
    environment:
      MONGO_INITDB_DATABASE: mymongo
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - ./database:/data/db
    ports:
      - "27017:27017"
networks:
  default:
    external: true
    name: my-net
Second compose file
version: '3.9'
services:
  ui:
    build:
      context: ./build
      dockerfile: Dockerfile_ui
    image: ui
    restart: "no"
    container_name: ui
    ports:
      - "8005:3000"
    command: ["npm", "start"]
networks:
  default:
    external: true
    name: my-net
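With both projects attached to my-net, the ui container can reach the database by its container name (mongo above; Compose also adds the service name mymongo as a network alias). Assuming the credentials from the first file, a connection string would look something like:
mongodb://root:password@mongo:27017/mymongo?authSource=admin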
You can do this without any special Compose setup, if:
- each project is self-contained (they do not share databases)
- the service locations are configurable via environment variables
- you don't mind communicating via the host
If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.
Go ahead and break up your Compose file into several independent ones:
# rails/docker-compose.yml
version: '3.8'
services:
  db:
    image: mysql/mysql-server
  app:
    build: .
    ports: ['4000:4000']
    depends_on: [db]
# go/docker-compose.yml
version: '3.8'
services:
  mongo:
    image: mongo
  service:
    build: .
    ports: ['8080:8080']
    depends_on: [mongo]
    environment:
      - RAILS_APP_URL
The very last line here passes the RAILS_APP_URL environment variable from the host environment into the container.
You can start the Rails application independently:
docker-compose -f ./rails/docker-compose.yml up -d
You need to find some hostname where the container can call back to the host. On macOS and Windows hosts, Docker provides the special hostname host.docker.internal for this. You can then connect the client container to the published port of its server:
export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
If you're doing development, you can run the service you're working on locally, with its dependencies in containers, and point the environment variable at the container's published port:
go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server
If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.
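As a sketch of that last point (all names here are illustrative, not from the original setup), the URL would live in values.yaml and be injected through the Deployment template:
# values.yaml
railsAppUrl: http://rails-app:4000
# templates/deployment.yaml (fragment)
env:
  - name: RAILS_APP_URL
    value: {{ .Values.railsAppUrl | quote }}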
Docker doesn't use the latest code after running git checkout <non_master_branch>, even though I can see it in VS Code.
I am using the following docker-compose file:
version: '2'
volumes:
  pgdata:
  backend_app:
services:
  nginx:
    container_name: nginx-angular-dev
    image: nginx-angular-dev
    build:
      context: ./frontend
      dockerfile: /.docker/nginx.dockerfile
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - web
  web:
    container_name: django-app-dev
    image: django-app-dev
    build:
      context: ./backend
      dockerfile: /django.dockerfile
    command: ["./wait-for-postgres.sh", "db", "./django-entrypoint.sh"]
    volumes:
      - backend_app:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    env_file: .env
    environment:
      FRONTEND_BASE_URL: http://192.168.99.100/
      BACKEND_BASE_URL: http://192.168.99.100/api/
      MODE_ENV: DOCKER_DEV
  db:
    container_name: django-db
    image: postgres:10
    env_file: .env
    volumes:
      - pgdata:/var/lib/postgresql/data
I have tried docker-compose build --no-cache, followed by docker-compose up --force-recreate, but it didn't solve the problem.
What is the root of my problem?
Your volumes: are causing problems. Docker volumes aren't intended to hold code, and you should delete the volume declarations that mention backend_app:.
Your docker-compose.yml file says in part:
volumes:
  backend_app:
services:
  web:
    volumes:
      - backend_app:/code
backend_app is a named volume: it keeps data that must be persisted across container runs. If the volume doesn't exist yet the first time then data will be copied into it from the image, but after that, Docker considers it to contain critical user data that must not be updated.
If you keep code or libraries in a Docker volume, Docker will never update it, even if the underlying image changes. This is a common problem in JavaScript applications that mount an anonymous volume on their node_modules directory.
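For illustration, that JavaScript pattern typically looks like this (a sketch, not taken from the question):
services:
  node:
    build: .
    volumes:
      - .:/app              # bind-mount the source tree for live editing
      - /app/node_modules   # anonymous volume that shadows, and freezes, node_modules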
As a temporary workaround, docker-compose down -v will delete all of the volumes, including the one with your code in it, and the next time you start, the volume will be recreated from the image.
The best solution is to simply not use a volume here at all. Delete the lines above from your docker-compose.yml file. Develop and test your application in a non-Docker environment, and when you're ready to do integration testing, run docker-compose up --build. Your code will live in the image, and an ordinary docker build will produce a new image with new code.
I have configured a distributed version of Cassandra using Docker Compose.
Here is my docker-compose.yml file:
version: '3.0'
services:
  cassandra-masters:
    image: strapdata/elassandra
    environment:
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-masters
  cassandra-slaves1:
    image: strapdata/elassandra
    environment:
      CASSANDRA_SEEDS: tasks.cassandra-masters
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-slaves1
    depends_on:
      - cassandra-masters
After deploying the compose file with sudo docker stack deploy elassandra --compose-file docker-compose.yml, everything works well and I can see the services with the docker service ls command.
Problem: I don't know how to use volumes with this distributed set of containers. Is it like the normal docker-compose configuration found on Docker's site, or is it different?
Solution: I tried named volumes like the following. There isn't any difference between this (distributed) approach and the normal approach; the only thing to consider is that the volume should be shared:
version: '3.0'
services:
  cassandra-masters:
    image: strapdata/elassandra
    environment:
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-masters
    volumes:
      - app-volume:/var/lib/cassandra
  cassandra-slaves1:
    image: strapdata/elassandra
    environment:
      CASSANDRA_SEEDS: tasks.cassandra-masters
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-slaves1
    depends_on:
      - cassandra-masters
    volumes:
      - app-volume:/var/lib/cassandra
volumes:
  app-volume:
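One caveat, not covered in the original post: with the default local driver, docker stack deploy creates a separate, independent volume on every swarm node that runs a task, so nothing is actually shared across hosts. For a genuinely shared volume you need a multi-host backend; a minimal sketch, assuming an NFS export at 10.0.0.10:/exports/cassandra:
volumes:
  app-volume:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.10,rw
      device: ":/exports/cassandra"
(Sharing one data directory between two Cassandra nodes is itself risky; normally each node gets its own volume.)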