I created a docker-compose file to build an image from a Dockerfile and then run a container. This is my code:
Dockerfile
FROM anapsix/alpine-java
VOLUME [ "/var/run/jars/" ]
ADD hello-world.jar /var/run/jars/
EXPOSE 8080
ENTRYPOINT [ "java" ]
CMD ["-?"]
docker-compose.yml
version: '3'
services:
hello-world-image:
build: .
image: hello-world-image
hello-world:
image: hello-world-image
container_name: hello-world
ports:
- "8080:8080"
volumes:
- ./logs_ACM:/root/logs_ACM
command: -jar /var/run/jars/hello-world.jar
restart: always
docker ps output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
103b0a3c30e3 hello-world-image "java -jar /var/run/…" 5 seconds ago Restarting (1) Less than a second ago hello-world
When I check running containers with "docker ps", the PORTS column is empty, so no port mapping was done even though I specified ports in my docker-compose file.
What changes need to be made to my docker-compose file to solve this issue?
New version of the Dockerfile and docker-compose.yml:
FROM anapsix/alpine-java
USER root
RUN mkdir -p /var/run/jars/
COPY spring-petclinic-2.4.2.jar /var/run/jars/
EXPOSE 8081
ENTRYPOINT [ "java" ]
CMD ["-?"]
version: '3' # '3' means '3.0'
services:
spring-petclinic:
build: .
# Only if you're planning to `docker-compose push`
# image: registry.example.com/name/hello-world-image:${TAG:-latest}
ports:
- "8081:8081"
volumes:
# A bind-mount directory to read out log files is a good use of
# `volumes:`. This does not require special setup in the Dockerfile.
- ./logs_ACM:/root/logs_ACM
command: -jar /var/run/jars/spring-petclinic-2.4.2.jar
mysql:
image: mysql:5.7
ports:
- "3306:3306"
environment:
- MYSQL_ROOT_PASSWORD=
- MYSQL_ALLOW_EMPTY_PASSWORD=true
- MYSQL_USER=petclinic
- MYSQL_PASSWORD=petclinic
- MYSQL_DATABASE=petclinic
volumes:
- "./conf.d:/etc/mysql/conf.d:ro"
I think your single biggest problem here is the VOLUME directive in the Dockerfile. The Dockerfile documentation for VOLUME notes:
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
So when you declare a VOLUME for the directory containing the jar file, and then try to ADD content to it, it gets lost.
In most practical cases you don't need a VOLUME. You should be able to rewrite the Dockerfile to:
FROM anapsix/alpine-java
# Do not create a VOLUME.
# Generally prefer COPY to ADD. Will create the target directory if needed.
COPY hello-world.jar /var/run/jars/
EXPOSE 8080
# Don't set an ENTRYPOINT just naming an interpreter.
# Do make the default container command be to run the application.
CMD ["java", "-jar", "/var/run/jars/hello-world.jar"]
In the docker-compose.yml file, you don't need a separate "service" just to build the image, and you shouldn't typically need to override container_name: (provided by Compose) or command: (from the Dockerfile). This could be reduced to:
version: '3.8' # '3' means '3.0'
services:
hello-world:
build: .
# Only if you're planning to `docker-compose push`
# image: registry.example.com/name/hello-world-image:${TAG:-latest}
ports:
- "8080:8080"
volumes:
# A bind-mount directory to read out log files is a good use of
# `volumes:`. This does not require special setup in the Dockerfile.
- ./logs_ACM:/root/logs_ACM
# Don't enable auto-restart until you've debugged the start sequence
# restart: always
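With those two files in place, a typical rebuild-and-check cycle looks roughly like this (the PORTS value shown is what you would expect to see, not captured output):
docker-compose up --build -d
docker-compose ps
# the PORTS column should now show something like 0.0.0.0:8080->8080/tcp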
Related
I've installed MySQL and phpMyAdmin on Docker.
The MySQL volume mount works perfectly fine,
but I also want the container's /var/www/html/libraries and /var/www/html/themes folders to be saved/persisted to my host,
so that if I change any file, the change stays.
This is my docker-compose.yml
version: '3.5'
services:
mysql:
container_name: mysql
image: mysql
restart: always
volumes:
- ./var/lib/mysql:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: root
phpmyadmin:
container_name: phpmyadmin
image: phpmyadmin/phpmyadmin:latest
restart: always
volumes:
- ./phpmyadmin/libraries:/var/www/html/libraries # Here's the problem
- ./phpmyadmin/themes:/var/www/html/themes # Here's the problem
environment:
PMA_HOST: mysql
The current problem is that it does create the folders /phpmyadmin/libraries and /phpmyadmin/themes,
but inside they're empty, and the container's directories (/var/www/html/libraries, /var/www/html/themes) also become empty.
I'm very new to Docker, and currently I have no clue :(
Many thanks in advance.
Your problem is that /var/www/html is populated at build time, and volumes are mounted at run time, which causes /var/www/html to be overwritten by what you have locally (i.e. nothing).
You need to extend the Dockerfile for PHPMyAdmin to delay populating those directories until after the volumes have been mounted. You'll need something like this setup:
Modify docker-compose.yml to the following:
...
phpmyadmin:
container_name: phpmyadmin
build:
# Use the Dockerfile located at ./build/phpmyadmin/Dockerfile to build this image
context: ./build/phpmyadmin
dockerfile: Dockerfile
restart: always
volumes:
- ./phpmyadmin/libraries:/var/www/html/libraries
- ./phpmyadmin/themes:/var/www/html/themes
environment:
PMA_HOST: mysql
Create a file at ./build/phpmyadmin/Dockerfile with this content:
FROM phpmyadmin/phpmyadmin:latest
# Move the directories you want into a temporary directory
RUN mv /var/www/html /tmp/
# Modify the start up of this image to use a custom script
COPY ./custom-entrypoint.sh /custom-entrypoint.sh
RUN chmod +x /custom-entrypoint.sh
ENTRYPOINT ["/custom-entrypoint.sh"]
CMD ["apache2-foreground"]
Create a custom entrypoint at ./build/phpmyadmin/custom-entrypoint.sh with this content:
#!/bin/sh
# Copy over the saved files
cp -r /tmp/html /var/www
# Kick off the original entrypoint
exec /docker-entrypoint.sh "$@"
Then you can build and start everything with docker-compose up --build.
Note: this will probably cause issues for you if you're trying to version control these directories - you'll probably need to modify custom-entrypoint.sh.
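As a quick sanity check after the first docker-compose up --build, the bind-mounted host directories from the compose file above should no longer be empty:
ls ./phpmyadmin/libraries ./phpmyadmin/themes
# both should now list the files copied back in by custom-entrypoint.sh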
I'm using Docker
Docker version 19.03.8, build afacb8b
I have the following docker-compose.yml file ...
version: "3.2"
services:
sql-server-db:
build: ./
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
ports:
- "1433:1433"
environment:
SA_PASSWORD: "Password1!"
ACCEPT_EULA: "Y"
and here is the Dockerfile it uses to build ...
FROM microsoft/mssql-server-linux:latest
# Create work directory
RUN mkdir -p /usr/work
WORKDIR /usr/work
# Copy all scripts into working directory
COPY . /usr/work/
# Grant permissions for the import-data script to be executable
RUN chmod +x /usr/work/import-data.sh
EXPOSE 1433
CMD /bin/bash ./entrypoint.sh
On my local machine, I have some files in a "../../scripts/myproject/*.sql" directory (the ".." are relative to the directory where my docker-compose.yml file is stored). Is there a way I can run "docker-compose up" and have those files copied into a directory from which I can then copy them into the container's "/usr/work" directory?
There are 2 ways to solve this, with one being easier than the other, but both have use cases.
The easy way
You could mount the directory directly to the container through the docker-compose like this:
version: "3.2"
services:
sql-server-db:
build: ./
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
ports:
- "1433:1433"
environment:
SA_PASSWORD: "Password1!"
ACCEPT_EULA: "Y"
volumes:
- ../../scripts/myproject:/path/to/dir
Note the added volumes: section compared to the YAML in your question. This will mount the myproject directory to /path/to/dir within the container. It also means that if the sql-server-db container writes to any of the files in /path/to/dir, the corresponding file in myproject on the host machine will change as well, since the files are mounted rather than copied.
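One quick way to see that two-way behaviour once the container is up, using the service name and mount path from the example above (the file name here is just an illustration):
docker-compose exec sql-server-db touch /path/to/dir/created-in-container.txt
ls ../../scripts/myproject/
# created-in-container.txt now also exists on the host, because the directory is bind-mounted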
The less easy way
You could copy the files directly during the build of the image. This is a little harder, since a Docker build doesn't allow copying from parent directories unless you add some extra arguments. What needs to happen is that you set the build context to a different directory than the current one. The context determines which files are sent to the build; by default it is the directory the Dockerfile resides in.
To take this approach, you need the following in your docker-compose.yml:
version: "3.2"
services:
sql-server-db:
build:
context: ../..
dockerfile: path/to/Dockerfile # Here you should specify the path to your Dockerfile, this is a relative path from your context
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
ports:
- "1433:1433"
environment:
SA_PASSWORD: "Password1!"
ACCEPT_EULA: "Y"
Above, the context is now ../.., so you are able to copy files from two directories up. You can then copy the myproject directory in your Dockerfile like this:
FROM microsoft/mssql-server-linux:latest
COPY ./scripts/myproject /myfiles
The advantage of this approach is that the files are copied instead of being mounted, so the docker container can write whatever it wants to these files, without affecting the host machine.
I am trying to start a Docker container using a Redis DB that I have a persistent copy of saved on a local machine.
I currently have a Docker container loading Redis with a volume using this docker-compose.yml, but it misses my redis.conf (which contains the loadmodule command), which is located in the volume with the RDB file.
version: '3'
services:
redis:
image: redis
container_name: "redis"
ports:
- "6379:6379"
volumes:
- E:\redis_backup_conf:/data
This begins to load the RDB but crashes out because the data uses the time series module.
I can load a separate Docker container with a fresh Redis DB that has the time series module loaded using the following Dockerfile. My issue is I can't figure out how to do both at the same time!
Is there some way of calling a Dockerfile from a docker-compose.yml, or declaring the volume in the Dockerfile?
That, or should I be creating my own image that I can call in the docker-compose.yml?
Any help would be appreciated; I'm honestly just going round in circles, I think.
dockerfile
# BUILD redisfab/redistimeseries:${VERSION}-${ARCH}-${OSNICK}
ARG REDIS_VER=6.0.1
# stretch|bionic|buster
ARG OSNICK=buster
# ARCH=x64|arm64v8|arm32v7
ARG ARCH=x64
#----------------------------------------------------------------------------------------------
FROM redisfab/redis:${REDIS_VER}-${ARCH}-${OSNICK} AS builder
ARG REDIS_VER
ADD ./ /build
WORKDIR /build
RUN ./deps/readies/bin/getpy2
RUN ./system-setup.py
RUN make fetch
RUN make build
#----------------------------------------------------------------------------------------------
FROM redisfab/redis:${REDIS_VER}-${ARCH}-${OSNICK}
ARG REDIS_VER
ENV LIBDIR /usr/lib/redis/modules
WORKDIR /data
RUN mkdir -p "$LIBDIR"
COPY --from=builder /build/bin/redistimeseries.so "$LIBDIR"
EXPOSE 6379
CMD ["redis-server", "--loadmodule", "/usr/lib/redis/modules/redistimeseries.so"]
EDIT:
OK, slight improvement: I can call a redis-timeseries image in the docker-compose.yml:
services:
redis:
image: redislabs/redistimeseries
container_name: "redis"
ports:
- "6379:6379"
volumes:
- E:\redis_backup_conf:/data
This is a start; however, I still need to increase the maximum number of databases, which I have been doing via redis.conf in the past.
You can just have docker-compose build your Dockerfile directly. Assume your docker-compose file is in a folder called myproject. Also assume your Dockerfile is in a folder called myredis, and that myredis is inside the myproject folder. Then you can replace this line in your docker-compose file:
image: redis
with:
build: ./myredis
That will build and use your custom image.
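Putting that together with the compose file from the question, a sketch could look like this (myredis is the folder holding the Dockerfile from the question; the commented-out command: line is only an option if you also want to start from your own redis.conf in the mounted /data directory, and it repeats the module path from that Dockerfile):
version: '3'
services:
  redis:
    build: ./myredis
    container_name: "redis"
    ports:
      - "6379:6379"
    volumes:
      - E:\redis_backup_conf:/data
    # optional: start from your own config file and still load the time series module
    # command: ["redis-server", "/data/redis.conf", "--loadmodule", "/usr/lib/redis/modules/redistimeseries.so"]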
I'm unable to start the Tomcat server from Docker Compose.
When I log into the container using docker exec -it <container id> bash and run ps -eaf | grep "tomcat", the output is empty. The Tomcat server is not running.
docker-compose.yml file:
version: "3"
services:
meghcore:
build: ./Core
container_name: 'meghcore'
expose:
- '8080'
ports:
- '8080:8080'
volumes:
- meghcore:/opt/Tomcat1/webapps/
command: /bin/bash
tty: true
stdin_open: true
networks:
- meghnet
volumes:
meghcore:
networks:
meghnet:
driver: bridge
Dockerfile file:
FROM tomcat:8.5.35
WORKDIR /app
COPY . /app
RUN mv /app/*.war /opt/Tomcat1/webapps/
ENV PATH $PATH:/opt/Tomcat1/bin
WORKDIR /opt/Tomcat1/bin
EXPOSE 8080
CMD ["catalina.sh", "run"]
Since you specify an alternate command: in your docker-compose.yml file, that overrides the CMD in the Dockerfile. You don't need most of the options you specify there at all, and several of them (the alternate command:, the volumes: overwriting the actual application) interfere with the normal container operation.
A complete, functional docker-compose.yml would be
version: "3"
services:
meghcore:
build: ./Core
ports:
- '8080:8080'
None of the other options you list out are necessary. If there were other containers listed in the file, they could still communicate using their Docker Compose service names, without any special setup (another container in this same file could successfully call http://meghcore:8080).
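For illustration, a hypothetical extra service added to the same file could reach Tomcat by its service name with no extra networks: or links: configuration (the smoketest container is purely an example, and it may need a retry since depends_on does not wait for Tomcat to finish starting):
version: "3"
services:
  meghcore:
    build: ./Core
    ports:
      - '8080:8080'
  smoketest:
    image: alpine
    depends_on:
      - meghcore
    command: wget -qO- http://meghcore:8080/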
What is happening is that the command specified in docker-compose.yml is overriding the CMD provided in the Dockerfile.
Kindly update command with the command available in the Dockerfile, or remove command from docker-compose.yml.
The problem was resolved by adding the commands below to the Dockerfile and removing command from the docker-compose file.
ENV PATH $PATH:$JAVA_HOME/bin
ENV PATH $PATH:/opt/Tomcat1/bin
WORKDIR /opt/Tomcat1/bin
EXPOSE 8080
CMD ["catalina.sh", "run"]
I am using Docker, which is running fine.
I can start a Docker image using docker-compose:
docker-compose rm nodejs; docker-compose rm db; docker-compose up --build
I attached a shell to the Docker container using
docker exec -it nodejs_nodejs_1 bash
I can view files inside the container
(inside container)
cat server.js
Now when I edit the server.js file on the host, I would like the file inside the container to change without having to restart Docker.
I have tried to add volumes to the docker-compose.yml file or to the Dockerfile, but somehow I cannot get it to work.
(Dockerfile, not working)
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
VOLUMES ["/usr/src/app"]
EXPOSE 8080
CMD [ "npm", "run", "watch" ]
or
(docker-compose.yml, not working)
version: "3.3"
services:
nodejs:
build: ./nodejs-server
ports:
- "8001:8080"
links:
- db:db
env_file:
- ./.env-example
volumes:
- src: /usr/src/app
db:
build: ./mysql-server
volumes:
- ./mysql-server/data:/docker-entrypoint-initdb.d #A folder /mysql-server/data with a .sql file needs to exist
env_file:
- ./.env-example
volumes:
src:
There is probably a simple guide somewhere, but I haven't found it yet.
If you want a copy of the files to be visible in the container, use a bind mount volume (aka host volume) instead of a named volume.
Assuming your docker-compose.yml file is in the root directory of the location that you want in /usr/src/app, then you can change your docker-compose.yml as follows:
version: "3.3"
services:
nodejs:
build: ./nodejs-server
ports:
- "8001:8080"
links:
- db:db
env_file:
- ./.env-example
volumes:
- .:/usr/src/app
db:
build: ./mysql-server
volumes:
- ./mysql-server/data:/docker-entrypoint-initdb.d #A folder /mysql-server/data with a .sql file needs to exist
env_file:
- ./.env-example
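Once that is in place, you can verify the live update without restarting anything (nodejs is the service name from the file above, and the path assumes server.js ends up inside the mounted directory):
docker-compose up -d --build
# edit server.js on the host, then confirm the change is visible inside the container:
docker-compose exec nodejs cat /usr/src/app/server.js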