I have 4 services to run through docker compose:
version: "3"
services:
  billingmock:
    build:
      context: ./mock/soap/billing
      dockerfile: ./Dockerfile
    ports:
      - 8096:8096
  salcusmock:
    build:
      context: ./mock/soap/salcus
      dockerfile: ./Dockerfile
    ports:
      - 8088:8088
  ngocsrestmock:
    build:
      context: ./mock/rest/ngocs-rest
      dockerfile: ./Dockerfile
    volumes:
      - /test/mock-data/Ngocs-Rest-Mock:/usr/src/ngocs-rest-mock/
    ports:
      - 8091:8091
  kafka:
    image: <some-repo>.com/mce/kafka_local_r20-11
    ports:
      - 9092:9092
      - 8080:8080
      - 8081:8081
      - 8082:8082
but the ngocs container is not running; all the other containers are running. When I check the logs of that container I get: Exited (1) 36 seconds ago
Error: Unable to access jarfile mocks-mock-ngocs-rest-executable-1.0.0-SNAPSHOT.jar
The Dockerfile for that service is:
FROM openjdk:8
COPY /executable/target/mocks-mock-ngocs-rest-executable-1.0.0-SNAPSHOT.jar /usr/src/ngocs-rest-mock/
WORKDIR /usr/src/ngocs-rest-mock/
ENTRYPOINT ["java","-jar","mocks-mock-ngocs-rest-executable-1.0.0-SNAPSHOT.jar"]
I have to start the container manually, and then it runs, but the volume is not mounted. What seems to be the issue? Also, if I remove the volumes section in the docker-compose file, then it runs.
If you have a volumes: entry that binds a host directory to a container directory, then at container startup the contents of that host directory completely hide anything that was at that path in the underlying image. In your case, you're mounting a directory over the directory that contains the jar file, so the actual application gets hidden.
You should restructure your application to keep the data somewhere separate from the application code. Using simple top-level directories like /app and /data is common enough, or you can make the data directory a subdirectory of your application directory.
Once you've done this, you can change the volumes: mount to a different directory:
    # for example, a "data" subdirectory of the application directory
    volumes:
      - /test/mock-data/Ngocs-Rest-Mock:/usr/src/ngocs-rest-mock/data
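As a sketch of the matching Dockerfile side of this, the only addition is a RUN mkdir line to create the mount point; how the application is told to read its fixtures from the data/ subdirectory depends on its own configuration and is not shown here:

```dockerfile
FROM openjdk:8
COPY /executable/target/mocks-mock-ngocs-rest-executable-1.0.0-SNAPSHOT.jar /usr/src/ngocs-rest-mock/
WORKDIR /usr/src/ngocs-rest-mock/
# create the mount point so the bind mount lands on an empty
# subdirectory instead of hiding the jar next to it
RUN mkdir -p /usr/src/ngocs-rest-mock/data
ENTRYPOINT ["java","-jar","mocks-mock-ngocs-rest-executable-1.0.0-SNAPSHOT.jar"]
```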
I have a docker-compose.yml
version: '3.3'
services:
  ssh:
    environment:
      - TZ=Etc/UTC
      - DEBIAN_FRONTEND=noninterative
    build:
      context: './'
      dockerfile: Dockerfile
    ports:
      - '172.17.0.2:22:22'
      - '443:443'
      - '8025:8025'
    volumes:
      - srv:/srv:rw
    restart: always
volumes:
  srv:
After I run docker-compose up --build I can ssh into the container, and there are files in /srv. docker volume ls shows two volumes, srv and dockersetupsrv. They are both under /var/lib/docker/volumes; each contains a _data directory and has a creation timestamp matching the image creation time, but is otherwise empty. Neither one contains any of the files that are in the container's /srv directory. How can I share the container's /srv directory with the host?
You should be more specific about the directory mapping.
For example:
/srv:/usr/srv:rw
After that, when you add content to /srv on your host machine, it is automatically mapped into /usr/srv in the container.
Make sure the host directory exists.
You can check this link: https://docs.docker.com/storage/volumes/
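Applied to the compose file above, that means replacing the named volume srv with a bind mount; a sketch, where the host path ./srv is an assumption and any existing host directory would do:

```yaml
services:
  ssh:
    build:
      context: './'
      dockerfile: Dockerfile
    volumes:
      # bind mount: the host's ./srv is visible at /srv in the container
      - ./srv:/srv:rw
```

One caveat: unlike a named volume, a bind mount does not copy files baked into the image at /srv out to the host on first use; an (initially empty) host directory will hide them.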
It seems I have misunderstood something about volumes. I have a docker-compose file with two services: jobs, a Flask API built from a Dockerfile (see below), and mongo, from the official MongoDB image.
I have two volumes: .:/code, a bind mount from my host working directory to the /code folder in the container, and a named volume mongodata.
version: "3"
services:
  jobs:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    environment:
      FLASK_ENV: ${FLASK_ENV}
      FLASK_APP: ${FLASK_APP}
    depends_on:
      - mongo
  mongo:
    image: "mongo:3.6.21-xenial"
    restart: "always"
    ports:
      - "27017:27017"
    volumes:
      - mongodata:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
volumes:
  mongodata:
Dockerfile for the jobs service:
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=job-checker
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run", "--host=0.0.0.0"]
Every time I remove these containers and re-run, everything is fine; I still have my data in the mongodata volume. But when I check the volume list I can see that a new volume was created from .:/code with a long volume name, for example:
$ docker volume ls
DRIVER VOLUME NAME
local 55c08cd008a1ed1af8345cef01247cbbb29a0fca9385f78859607c2a751a0053
local abe9fd0c415ccf7bf8c77346f31c146e0c1feeac58b3e0e242488a155f6a3927
local job-checker_mongodata
Here I ran docker-compose up, then removed the containers, then ran up again, so I have two volumes from my working folder.
Is it normal that every up creates a new volume instead of reusing the previous one?
Thanks
Hidden at the end of the Docker Hub mongo image documentation is a note:
This image also defines a volume for /data/configdb...
The image's Dockerfile in turn contains the line
VOLUME /data/db /data/configdb
When you start the container, you mount your own volume over /data/db, but you don't mount anything on the second path. This causes Docker to create an anonymous volume there, which is the volume you're seeing with only a long hex ID.
It should be safe to remove the extra volumes, especially if you're sure they're not attached to a container and they don't have interesting content.
This behavior has nothing to do with the bind mount in the other container; bind mounts never show up in the docker volume ls listing at all.
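If you want to stop the anonymous volumes from accumulating, one option (a sketch, not required for correctness; the name mongoconfig is my own) is to mount a second named volume over /data/configdb so every path the image declares is covered:

```yaml
  mongo:
    image: "mongo:3.6.21-xenial"
    volumes:
      - mongodata:/data/db
      # covering the image's second declared volume path avoids
      # a fresh anonymous volume on every `up`
      - mongoconfig:/data/configdb

volumes:
  mongodata:
  mongoconfig:
```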
I am using Docker which is running fine.
I can start a Docker image using docker-compose.
docker-compose rm nodejs; docker-compose rm db; docker-compose up --build
I attached a shell to the Docker container using
docker exec -it nodejs_nodejs_1 bash
I can view files inside the container
(inside container)
cat server.js
Now when I edit the server.js file inside the host, I would like the file inside the container to change without having to restart Docker.
I have tried to add volumes to the docker-compose.yml file or to the Dockerfile, but somehow I cannot get it to work.
(Dockerfile, not working)
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
VOLUMES ["/usr/src/app"]
EXPOSE 8080
CMD [ "npm", "run", "watch" ]
or
(docker-compose.yml, not working)
version: "3.3"
services:
  nodejs:
    build: ./nodejs-server
    ports:
      - "8001:8080"
    links:
      - db:db
    env_file:
      - ./.env-example
    volumes:
      - src: /usr/src/app
  db:
    build: ./mysql-server
    volumes:
      - ./mysql-server/data:/docker-entrypoint-initdb.d # A folder /mysql-server/data with a .sql file needs to exist
    env_file:
      - ./.env-example
volumes:
  src:
There is probably a simple guide somewhere, but I haven't found it yet.
If you want a copy of the files to be visible in the container, use a bind mount volume (aka host volume) instead of a named volume.
Assuming your docker-compose.yml file is in the root directory of the location that you want in /usr/src/app, then you can change your docker-compose.yml as follows:
version: "3.3"
services:
  nodejs:
    build: ./nodejs-server
    ports:
      - "8001:8080"
    links:
      - db:db
    env_file:
      - ./.env-example
    volumes:
      - .:/usr/src/app
  db:
    build: ./mysql-server
    volumes:
      - ./mysql-server/data:/docker-entrypoint-initdb.d # A folder /mysql-server/data with a .sql file needs to exist
    env_file:
      - ./.env-example
I have 2 docker containers. One running tomcat and the other running mysql. I want to copy a .sql file into the already existing "docker-entrypoint-initdb.d" folder of the mysql container.
I used the following command in my Dockerfile:
COPY test.sql /docker-entrypoint-initdb.d
After both containers started, I saw that the folder "docker-entrypoint-initdb.d" was created in my tomcat container and test.sql was copied into it.
The file isn't copied where I need it to be; test.sql wasn't copied into the mysql container.
What can I do?
docker-compose.yml:
version: "2"
services:
  db:
    image: mysql
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD="true"
  myapp:
    build: ./myapp
    ports:
      - 8080:8080
      - 3306:3306
Build your own image for the database container with a Dockerfile like this:
FROM mysql
COPY test.sql /docker-entrypoint-initdb.d
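The compose file then needs to build that image instead of pulling mysql directly; a sketch, assuming the Dockerfile above and test.sql live in a ./db directory (the directory name is an assumption):

```yaml
services:
  db:
    build: ./db   # directory containing the Dockerfile and test.sql
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD="true"
```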
The tomcat container is built via a Dockerfile, whereas the mysql container (db) is created from the image named mysql.
You can mount the current folder (on the host) to "/docker-entrypoint-initdb.d" inside the container.
The new docker-compose.yml will look like this:
version: "2"
services:
  db:
    image: mysql
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD="true"
    volumes:
      - .:/docker-entrypoint-initdb.d
  myapp:
    build: ./myapp
    ports:
      - 8080:8080
      - 3306:3306
You are building the tomcat container but using the stock mysql image for db; that's why the file was copied into the tomcat container.
When the containers are up you can docker cp the file manually to the desired location.
If you want the database to be available to the container at startup, I suggest you use a dummy container with a mounted local filesystem, then restore the database manually in that container. Then remove the container and modify the docker-compose file like this:
version: "2"
services:
  db:
    image: mysql
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD="true"
    volumes:
      - /my/own/datadir:/var/lib/mysql
  myapp:
    build: ./myapp
    ports:
      - 8080:8080
      - 3306:3306
Another way would be to create your own image using the following Dockerfile:
FROM mysql
COPY test.sql /docker-entrypoint-initdb.d
I'm on Fedora 23 and I'm using docker-compose to build two containers: app and db.
I want to use Docker as my dev environment, but having to execute docker-compose build and up every time I change the code isn't nice. So I searched and tried the volumes option, but my code doesn't get copied into the container.
When I run docker-compose build, a RUN ls command doesn't list the app folder or any of its files.
Obs.: in the root folder I have: docker-compose.yml, .gitignore, app (folder), db (folder)
Obs¹.: If I remove the volumes and working_dir options and instead use a "COPY . /app" command inside app/Dockerfile, it works and my app runs, but I want it to sync my code.
Does anyone know how to make it work?
My docker-compose file is:
version: '2'
services:
  app:
    build: ./app
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      - DATABASE_HOST=db
      - DATABASE_USER=myuser
      - DATABASE_PASSWORD=mypass
      - DATABASE_NAME=dbusuarios
      - PORT=3000
    volumes:
      - ./app:/app
    working_dir: /app
  db:
    build: ./db
    environment:
      - MYSQL_ROOT_PASSWORD=123
      - MYSQL_DATABASE=dbusuarios
      - MYSQL_USER=myuser
      - MYSQL_PASSWORD=mypass
Here you can see my app container Dockerfile:
https://gist.github.com/jradesenv/d3b5c09f2fcf3a41f392d665e4ca0fb9
Here's the output of the RUN ls command inside the Dockerfile:
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
A volume is mounted in a container. The Dockerfile is used to create the image, and that image is used to make the container. What that means is a RUN ls inside your Dockerfile will show the filesystem before the volume is mounted. If you need these files to be part of the image for your build to complete, they shouldn't be in the volume and you'll need to copy them with the COPY command as you've described. If you simply want evidence that these files are mounted inside your running container, run a
docker exec $container_name ls -l /
Where $container_name will be something like ${folder_name}_app_1, which you'll see in a docker ps.
Two things: have you tried version: '3'? Version 2 seems to be outdated. Also try putting the working_dir into the Dockerfile rather than the docker-compose file; maybe it's not supported in version 2?
This is a recent docker-compose I have used with volumes and workdirs in the respective Dockerfiles:
version: '3'
services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    ports:
      - 3001:3001
    volumes:
      - ./frontend:/app
    networks:
      - frontend
  backend:
    build: .
    ports:
      - 3000:3000
    volumes:
      - .:/app
    networks:
      - frontend
      - backend
    depends_on:
      - "mongo"
  mongo:
    image: mongo
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
    networks:
      - backend
networks:
  frontend:
  backend:
You can extend or override the docker compose configuration. Please see this for more info: https://docs.docker.com/compose/extends/
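As a sketch of that override mechanism: compose automatically merges a docker-compose.override.yml on top of docker-compose.yml when one exists, so dev-only settings can live there (the service name app and the paths below are illustrative, not taken from the question):

```yaml
# docker-compose.override.yml - merged on top of docker-compose.yml
services:
  app:
    volumes:
      - ./app:/app   # dev-only bind mount kept out of the base file
    working_dir: /app
```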
I had this same issue on Windows!
volumes:
  - ./src/:/var/www/html
On Windows the ./src/ syntax might not work in the regular command prompt, so use PowerShell instead and then run docker-compose up -d.
It should work if it's a mounting issue.