I am trying to create a docker container with a volume called 'example', which has some files inside it, but I can't access it during build. Here are the files I am using:
# docker-compose.yml
version: "3"
services:
  example:
    build: .
    volumes:
      - "./example:/var/example"
    stdin_open: true
    tty: true
And:
# Dockerfile
FROM ubuntu
RUN ls /var/example
CMD ["/bin/bash"]
When I run:
sudo docker-compose up
It gives me an error:
ls: cannot access /var/example: No such file or directory
But when I delete the RUN command from the Dockerfile and run sudo docker-compose up again, and then run:
docker exec -it c949eef14fcd /bin/bash # c949eef14fcd is the id of the created container
ls /var/example
... in another terminal window, there is no error, and I can see all the files of the example directory. Why?
Sorry, I have just found out that volumes are not accessible during build; they are only accessible while the container is running, as stated in point 9 here. But when I changed my Dockerfile to this:
# Dockerfile
FROM ubuntu:14.04
CMD ["ls", "/var/example"]
... it worked perfectly well and printed out all the files inside the example folder.
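If the files are genuinely needed at build time, the usual alternative is to COPY them into the image instead of relying on the volume, since COPY reads from the build context and therefore works during `docker build`. A minimal sketch of that approach (assuming the example directory sits next to the Dockerfile):

```dockerfile
FROM ubuntu
# bake the files into the image so RUN can see them at build time
COPY ./example /var/example
RUN ls /var/example
CMD ["/bin/bash"]
```

At run time the Compose bind mount will still overlay /var/example with the host directory, which is usually what you want in development.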
Related
I have a docker-compose file with a service called 'app'. When I try to run it, I don't see the service with docker ps, but I do with docker ps -a.
I looked at the logs:
docker logs my_app_1
python: can't open file '//apps/index.py': [Errno 2] No such file or directory
In order to debug I wanted to be able to see the home directory and the files and dirs contained there when the app attempts to run.
Is there a command I can add to docker-compose that would show me the pwd and ls -l of the container when it attempts to run index.py?
My Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "apps/index.py"]
My docker-compose.yaml:
version: '3.1'
services:
  app:
    build:
      context: ./app
      dockerfile: ./Dockerfile
    depends_on:
      - db
    ports:
      - 8050:8050
My directory structure:
my_app:
* docker-compose.yaml
* app
  * Dockerfile
  * apps
    * index.py
You can add a RUN statement in the application Dockerfile to run these commands.
Example:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
# Run your commands
RUN pwd && ls -l
CMD ["python", "apps/index.py"]
Then you can check the logs of the build process and view the results.
I hope this answer helps you.
If you're just trying to debug an image you've already built, you can docker-compose run an alternate command:
docker-compose run app \
  ls -l ./apps
You don't need to modify anything in your Dockerfile to be able to do this (assuming it uses CMD correctly; see below).
If you need to do more intensive debugging, you can docker-compose run app sh (or, if your image has it, bash) to get an interactive shell. The container will include any mounted volumes and be on the same Docker network as the named container, but won't have published ports.
Note that the command here replaces the CMD in the Dockerfile. If your image uses ENTRYPOINT for its main command, or if it has a complete command split between ENTRYPOINT and CMD (especially, if you have ENTRYPOINT ["python"]), these need to be combined into a single CMD for this to work. If your ENTRYPOINT is a wrapper script that does some first-time setup and then runs the CMD, this approach will work fine; the debugging ls or sh will run after the first-time setup happens.
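A minimal runnable sketch of the wrapper-script pattern described above (names are illustrative; in a real entrypoint the hand-off line would be `exec "$@"` so the CMD replaces the shell, but here a function simulates the hand-off so the sketch runs on its own):

```shell
#!/bin/sh
# Sketch of an ENTRYPOINT wrapper: do one-time setup, then run
# whatever command was passed in ("$@") -- the image's CMD, or the
# override from `docker-compose run app ls -l`.
run_entrypoint() {
  # one-time setup would go here (migrations, config templating, ...)
  echo "setup: done"
  # hand off to the command; a real entrypoint would use: exec "$@"
  "$@"
}

run_entrypoint echo "hello from CMD"
```

Because the wrapper runs first either way, a debugging `ls` or `sh` passed via `docker-compose run` still benefits from the first-time setup.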
I have a very simple project:
Dockerfile:
FROM node:lts
VOLUME /scripts
WORKDIR /scripts
RUN bash -c 'ls /'
RUN bash -c 'ls /scripts'
RUN script.sh
docker-compose.yml:
version: '3.7'
services:
  service:
    build: .
    volumes:
      - .:/scripts
Then I run docker-compose build but it fails with /bin/sh: 1: script.sh: not found
From the ls /scripts I can see that Docker isn't binding my script to the container. I have Docker 19.03.8. Do you know what I am doing wrong?
When you run a Docker Compose file, the build: block is run first, and it ignores all of the options outside that block. A Dockerfile never has mounted volumes, it can never make network calls to other Compose containers, and it won't see environment: variables that are set elsewhere.
That means you must explicitly COPY code into your image before you can RUN it.
FROM node:lts
WORKDIR /scripts
COPY script.sh .
RUN ./script.sh
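One gotcha with that fix: COPY preserves the permission bits from the build context, so if script.sh is not executable on the host, the RUN still fails with a "not found"/"permission denied"-style error. A hedged variant that works regardless of host permissions:

```dockerfile
FROM node:lts
WORKDIR /scripts
COPY script.sh .
# ensure the script is executable even if the host copy is not
RUN chmod +x ./script.sh && ./script.sh
```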
I'm relatively new to Docker. I have a docker-compose.yml file that creates a volume. In one of my Dockerfiles I check to see the volume is created by listing the volume's contents. I get an error saying the volume doesn't exist. When does a volume actually become available when using docker compose?
Here's my docker-compose.yml:
version: "3.7"
services:
  app-api:
    image: api-dev
    container_name: api
    build:
      context: .
      dockerfile: ./app-api/Dockerfile.dev
    ports:
      - "5000:5000"
    volumes:
      - ../library:/app/library
    environment:
      ASPNETCORE_ENVIRONMENT: Development
I also need to have the volume available when creating my container because I use it in my dotnet restore command.
Here's my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS api-env
#list volume contents
RUN ls -al /app/library
WORKDIR /app/app-api
COPY ./app-api/*.csproj .
#need to have volume created before this command
RUN dotnet restore --source https://api.nuget.org/v3/index.json --source /app/library
#copies all files into current directory
COPY ./app-api/. .
RUN dotnet run Api.csproj
EXPOSE 5000
RUN echo "'dotnet running'"
I thought that by adding volumes: ... to docker-compose.yml, it would automatically create the volume. Do I still need to add a create-volume command in my Dockerfile?
TL;DR:
The commands you give in RUN are executed when the image is built, before any volumes are mounted.
The CMD is executed when the container starts, after the volumes are mounted.
Longer answer
The Dockerfile is used when building an image of the container. The image will then be used in a docker-compose.yml file to start up a container, to which a volume will be connected. The RUN command you are executing is executed when the image is built, so it will not have access to the volume.
You would normally issue a set of RUN commands, which would prepare the container image. Finally, you would define a CMD command, which would tell what program should be executed when a container starts, based on this image.
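For the `dotnet restore` case above, that means the local package feed has to be inside the build context and copied into the image, since the bind mount does not exist at build time. A sketch, assuming the compose file's build context is widened so that the library directory sits inside it:

```dockerfile
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS api-env
# copy the local package feed into the image so it exists at build time
COPY ./library /app/library
WORKDIR /app/app-api
COPY ./app-api/*.csproj .
# the restore can now see /app/library during the build
RUN dotnet restore --source https://api.nuget.org/v3/index.json --source /app/library
```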
I recently tried to clone our production code to a local setup; this code is running in production.
The docker file looks like
FROM jboss/keycloak
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]
I am successfully able to create the docker image, but when I try to run it I get an error:
Caused by: java.io.FileNotFoundException: km.json (No such file or directory)
Repo structure
km/keycloak-images/km.json
km/keycloak-images/DockerFile
km/keycloak-images/entrypoint.sh
Docker compose file structure
/km/docker-compose.yml
/km/docker-compose.dev.yml
The docker-compose.dev.yml looks like
version: '3'
# The only service we expose in local dev is the keycloak server
# running an h2 database.
services:
keycloak:
build: keycloak-image
image: dt-keycloak
environment:
DB_VENDOR: h2
KEYCLOAK_USER: admin
KEYCLOAK_PASSWORD: password
KEYCLOAK_HOSTNAME: localhost
ports:
- 8080:8080
I run the command from /km
docker-compose -f docker-compose.dev.yml up --build
Basically, check whether the file is present inside the docker container:
docker run --rm -it <imageName> /bin/bash  # run the image and get a shell inside the container
cd /opt/jboss  # check whether km.json is there or not
Edited: Basically the path for the source in COPY (km.json) is incorrect. Try using the correct path relative to the build context.
FROM jboss/keycloak
# changed this line
COPY ./km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]
Your COPY operation is wrong.
If you run from
/km
you probably need to change the COPY to
COPY keycloak-images/km.json /opt/jboss
If you run on a Mac, try using ADD instead of COPY, since the Mac has had issues with COPY.
Try with this compose file:
version: '3'
services:
  keycloak:
    build:
      context: ./keycloak-images
    image: dt-keycloak
    environment:
      DB_VENDOR: h2
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_HOSTNAME: localhost
    ports:
      - 8080:8080
You have to specify the Docker build context so that the files you need to copy are passed to the daemon.
Note that you need to adapt this context path when you do not execute docker-compose from the km directory. This is because your Dockerfile specifies
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
meaning that the build context sent to the Docker daemon must be a directory containing these files.
I did docker-compose down/docker container rm and noticed that I lost all my data created in the container. Yes, I forgot to mount my local directory as a volume in the first place.
To prevent this, on startup, I want to warn users that "the data will be non-persistent", if the local volume's not mounted.
Is there a way to detect from inside container whether a file or a directory is a mounted one via Docker?
I googled it but couldn't find a good way. And my current workaround is:
FROM alpine:latest
RUN \
mkdir /data && \
touch /data/.replaceme
...
ENTRYPOINT /detect-mount-and-start.sh
detect-mount-and-start.sh checks if /data/.replaceme exists. If so, it warns to mount a local volume and exits.
Are there a better way to detect it?
Note (2019/09/12): This container is not only used via docker-compose up but via docker run --rm too. And the local directory name is not fixed; it can be -v $(pwd)/mydata:/data or something like -v $(pwd)/data_local:/data, etc.
Note (2019/09/15): The situation is: I launched a container of a Markdown editor and created something like 100 .md files. Those files were saved in /data at the root of the container. I should have mounted the volume like -v $(pwd)/data:/data before everything. But I didn't ... and noticed it after removing the container. My bad, I know.
I don't know if I understand your question, but when you use docker-compose down, depending on how you created your docker-compose.yml, it will destroy your data. See:
This will delete your data when you execute down:
version: '2'
services:
  mysqldb:
    image: mysql:5.7
This won't delete your data when you execute down:
version: '2'
services:
  mysqldb:
    image: mysql:5.7
    volumes:
      - ./data:/var/lib/mysql
This won't delete your data when you execute down:
version: '2'
services:
  mysqldb:
    image: mysql:5.7
    volumes:
      - data-volume:/var/lib/mysql
volumes:
  data-volume:
    external: true
PS: I do not have enough reputation to comment on your question, so I am answering instead.
The way you are doing it may also work. But I was working with a client on a project in a development environment; it was a Node.js-based application, and they needed to make sure server.js existed before starting the container, with server.js expected at the mount location. So I came up with this approach, as I did not find a way to sense a shared Docker volume from inside the container.
Dockerfile
FROM alpine
RUN mkdir -p /myapp
COPY . /myapp
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh
#!/bin/sh
APP_PATH="/myapp"
files=$(ls $APP_PATH/*.js)
echo "Files in Docker mount location: $files"
if [ -f "$APP_PATH/server.js" ] && [ -f "$APP_PATH/index.js" ]; then
  echo "Starting container with host mount files"
  echo "Starting server.js"
  cd $APP_PATH
  node server.js
else
  >&2 echo "Error: Please mount the host location on the /myapp path of the container, i.e. -v host_node_project:/myapp. Current files: $(ls $APP_PATH)"
  exit 1
fi
build and run
docker build -t myapp .
docker run -it --rm --name myapp myapp
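A more direct way to detect the mount from inside the container, rather than checking for expected files, is to look the path up in /proc/mounts, where bind mounts and volumes appear as mount points (a sketch; assumes a Linux container, and the /data path is illustrative):

```shell
#!/bin/sh
# Succeed if the given path is a mount point inside the container.
# /proc/mounts lists each mount's target as its second field.
is_mounted() {
  target="$1"
  awk -v t="$target" '$2 == t { found=1 } END { exit !found }' /proc/mounts
}

if is_mounted /data; then
  echo "/data is mounted"
else
  echo "warning: /data is not mounted; data will not persist" >&2
fi
```

This avoids shipping a marker file in the image, and works whatever the host-side directory is called, since only the container-side path is checked.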
docker-compose stop doesn't destroy your containers.
Then you can use:
docker-compose start