Docker-compose volume doesn't work as expected

I'm trying to mount a volume via docker-compose, but after running docker-compose up the directory is empty.
Dockerfile
FROM alpine:3.8
COPY test.txt ./app/
docker-compose.yml
version: "3.7"
services:
test:
image: myrep/image:latest
volumes:
- "./app/:/app/"
My procedure:
Build docker image on client (docker build .)
Push docker image to my registry (docker tag xxx myrep/image && docker push myrep/image)
On the server I pull the image (docker pull myrep/image)
Run docker-compose up (docker-compose up)
Then when I look into the app folder there is no test.txt file
Any idea what I'm doing wrong?

You copied the file into the image, but when you start the container, mounting your host directory over /app hides it.
If you want the file to be there, don't mount the volume.
You can verify this by starting the service without the volume and executing:
docker-compose exec test ls -l /app
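To see the difference directly, you can also compare /app with and without the bind mount (a quick sketch using the image name from the question):
# /app as baked into the image: test.txt is present
docker run --rm myrep/image:latest ls -l /app
# /app with the host directory mounted over it: only the host files appear
docker run --rm -v "$PWD/app:/app" myrep/image:latest ls -l /app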

Maybe you should try adding ./ before test.txt, since it is not copying the file to the root directory.
Hope it works for you:
FROM alpine:3.8
COPY ./test.txt ./app/

Related

Folder created in Dockerfile not visible after running container

I am using Docker and have a docker-compose.yml and a Dockerfile. In my Dockerfile I create a folder. When I build the image, I can see that the folder is created, but when I run the container, all the files are visible except the folder I created during the build.
Both of these files are in the same location.
Here is docker-compose
version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/app
    command: tail -f /dev/null # keeps the container running
Here is my Dockerfile
FROM alpine
WORKDIR /app
COPY . /app
RUN mkdir "test"
RUN ls
To build the image I use the command docker-compose build --progress=plain --no-cache. The RUN ls step in the Dockerfile prints three entries: Dockerfile, docker-compose.yml, and the test directory created in the Dockerfile.
When my container is running and I enter it to check which files are there, the 'test' directory is missing.
I probably have the 'volumes' mounted incorrectly. When I remove them, after entering the container, I do see the 'test' folder. Unfortunately, I want my files in the container and on the host to stay in sync.
This is a simplified example; the same thing happens when I install dependencies in Node.js. I think the question written this way will be more helpful to others.
When you mount that volume in docker-compose, the host folder is mounted over /app in the running container, and the test folder created at build time is hidden by the mounted folder.
This may work for you (creating the folder from the docker-compose file; the command is wrapped in sh -c so that the && is interpreted by a shell), but I'm not really sure about your use case:
version: '3'
services:
  app:
    build: .
    volumes:
      - ./:/app
    command: sh -c "mkdir -p /app/test && tail -f /dev/null"
Based on your comment, below is an example of how I would use a Dockerfile to build node packages and save node_modules back to the host:
Dockerfile
FROM node:latest
COPY . /app
RUN npm install
Shell script
#!/bin/bash
# Build the image, run a throwaway container from it,
# copy node_modules back to the host, then clean up.
docker build -t my-image .
container_id=$(docker run -d my-image)
docker cp "$container_id":/app/node_modules .
docker stop "$container_id"
docker rm "$container_id"
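Assuming the script above is saved as build.sh (the file name here is just an example), running it from the project root should leave a node_modules directory next to your sources:
chmod +x build.sh
./build.sh
ls node_modules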
Or, a simpler way that I tend to use, is to just run the Docker image and open a shell inside it:
docker run --rm -it -p 80:80 -v $(pwd):/home/app node:14.19-alpine
Open a shell in the running container, perform the npm commands, then exit.
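A rough sketch of that workflow (the container name and the npm commands are illustrative, not taken from the question):
# in a second terminal, open a shell in the running container
docker exec -it <container-name-or-id> sh
# inside the container
cd /home/app
npm install
exit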

Why is docker-compose still using an old image when using a remote `--context`

I have a node app that I am trying to deploy to my server using a remote context. However, files are not being copied to the built image on the server. I am running:
touch testing.txt # Make changes to the project
docker-compose --context server up --build -d # Deploy new version
docker --context server exec MY_CONTAINER pwd # /user/src/app
docker --context server exec MY_CONTAINER ls # testing.txt not there
However, it updates perfectly fine when running locally
docker exec MY_CONTAINER ls # testing.txt not there
touch testing.txt
docker-compose up --build -d # Deploy new version
docker exec MY_CONTAINER ls # testing.txt exists
I've even tried using force-recreate (docker-compose --context server up --force-recreate --build -d). The image is apparently recreated (docker ps shows a recent creation time), but the file is still not there.
The only thing that works is to delete the container and the image with docker rm and docker rmi and then rerun the first set of commands.
What's even more strange is that the up command after making changes says it didn't use the cached image layer:
# Running after a changed/added file
=> [5/6] COPY . ./
=> [6/6] RUN yarn build
# Running after nothing changed
=> CACHED [5/6] COPY . ./
=> CACHED [6/6] RUN yarn build
What am I doing wrong?
Here are my files
### Dockerfile
FROM node:14
WORKDIR /user/src/app
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN yarn build
CMD ["yarn", "start"]
### docker-compose.yaml
version: "3.9"
services:
web:
build: .
ports:
- "1234:4321"
restart: always
### .dockerignore
.git
node_modules/
I personally solved the problem by adding an image tag
services:
  web:
    image: image-name:v1.0
Every time the tag is changed, docker compose up --build recreates the image.
Step 1) Increase the version of the tag
Step 2) Run the following commands
$ docker-compose --context CONTEXT -f docker-compose.yaml build
$ docker-compose --context CONTEXT -f docker-compose.yaml stop
$ docker-compose --context CONTEXT -f docker-compose.yaml rm -f
$ docker-compose --context CONTEXT -f docker-compose.yaml up -d --build
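A variation on the same idea (a sketch, not from the original answer): parameterize the tag with an environment variable so bumping the version doesn't require editing the compose file.
services:
  web:
    build: .
    image: image-name:${TAG:-latest}
Then deploy a new version with:
TAG=v1.1 docker-compose --context CONTEXT up -d --build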

Is it possible to create a docker image that just contains non-executable files copied from the host?

Is it possible to create an image from the contents of a folder on the host, and later extract the content from the image onto another machine?
If so, how?
Here is my failed attempt:
Dockerfile
WORKDIR '/data'
COPY ./hostfolder .
Command executed:
docker build -t mydata .
Folder structure:
Error:
Sending build context to Docker daemon 3.584kB
Error response from daemon: No build stage in current context
Yes, you can use a docker image as a place to store and then extract files.
First, you are missing a FROM directive in your Dockerfile. This is the reason for your error:
FROM alpine
WORKDIR '/data'
COPY . .
Then, to build the image:
$ docker build -t temp .
Then, to extract files, start the container:
$ docker run --detach --name data temp
and copy from the container to the host:
$ docker cp data:/data ./result
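Since the container only exists to hold the files, a small variation (not from the original answer) is to create it without running it and remove it once the copy is done:
docker create --name data temp
docker cp data:/data ./result
docker rm data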

Docker image running incorrect command

What would cause a Docker image to not run the command specified in its docker-compose.yaml file?
I have a Dockerfile like:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /code
WORKDIR /code
COPY ./pip-requirements.txt pip-requirements.txt
COPY ./code /code/
RUN pip install --trusted-host pypi.python.org -r pip-requirements.txt
And a docker-compose.yaml file like:
version: '3'
services:
  worker:
    container_name: myworker
    image: registry.gitlab.com/mygitlabuser/mygitlabproject:latest
    network_mode: host
    build:
      context: .
      dockerfile: Dockerfile
    command: ./myscript.py --no-wait --traceback
If I build and run this locally with:
docker-compose -f docker-compose.yaml up
The script runs for a few minutes and I get the expected output. Running docker ps -a shows a container called "myworker" was created, as expected.
I now want to upload this image to a repo and deploy it to a production environment by downloading and running it on a remote server.
I re-build the image with:
docker-compose -f docker-compose.yaml build
and then upload it with:
docker login registry.gitlab.com
docker push registry.gitlab.com/myuser/myproject:latest
This succeeds and I confirm the new image exists in my gitlab image repository.
I then login to the production server and download the image with:
docker login registry.gitlab.com
docker pull registry.gitlab.com/myuser/myproject:latest
Again, this succeeds with docker reporting:
Status: Downloaded newer image for registry.gitlab.com/myuser/myproject:latest
Running docker images and docker ps -a shows no existing images or containers.
However, this is where it gets weird. If I then try to run this image with:
docker run registry.gitlab.com/myuser/myproject:latest
nothing seems to happen. Running docker ps -a shows that a single container with the command "python2" and the name "gracious_snyder" was created, neither of which matches my image. It also says the container exited immediately after launch. Running docker logs gracious_snyder shows nothing.
What's going on here? Why isn't my image running the correct command? It's almost as if all the parameters in my docker-compose.yaml file are being ignored and the container is reverting to the defaults of the base python:2.7 image, but I don't know why that would be, because I built the image using docker-compose and it ran fine locally.
I'm running Docker version 18.09.6, build 481bc77 on both local and remote hosts and docker-compose version 1.11.1, build 7c5d5e4 on my localhost.
Without a command (CMD) defined in your Dockerfile, you get the upstream value from the FROM image. The compose file has some settings to build the image, but most of the values are defining how to run the image. When you run the image directly, without the compose file (docker vs docker-compose), you do not get the runtime settings defined in the compose file, only the Dockerfile settings baked into the image.
The fix is to either use your compose file, or define the CMD inside the Dockerfile like:
CMD ./myscript.py --no-wait --traceback
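If the script isn't marked executable or lacks a shebang line, the exec form that calls the interpreter explicitly is a safer variant (a sketch; it assumes myscript.py sits in the WORKDIR /code as in the question's Dockerfile):
CMD ["python", "./myscript.py", "--no-wait", "--traceback"]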

How to copy files to a Docker volume and use that volume with docker-compose

There is a webservice running in a Docker container.
This webservice relies on big json files to boot.
I create a Docker volume to store the json files with docker volume create my-api-files.
Here is the docker-compose file for the webservice:
version: '3'
services:
  my-api:
    image: node:alpine
    expose:
      - ${NODE_PORT}
    volumes:
      - ./:/api
      - my-api-files:/files
    working_dir: /api
    command: npm run start
volumes:
  my-api-files:
    external: true
Now, how can I copy the JSON files to the my-api-files Docker volume before starting the webservice with docker-compose up?
You could run a temporary container with that volume and a bind mount to your host files and run a copy from there (the cp is wrapped in sh -c so the *.json glob is expanded inside the container):
docker run --rm -it -v my-api-files:/temporary -v $PWD/jsonFileLocation:/big-data alpine sh -c 'cp /big-data/*.json /temporary'
docker run --rm -it -v my-api-files:/test alpine ls /test
You should see your JSON files in there.
EDIT: Of course, replace $PWD/jsonFileLocation with your JSON file location, using your operating system's path syntax.
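If you prefer to keep the copy step inside compose, a one-off "seed" service could do the same thing (a sketch, not part of the original answer; the service and path names are illustrative):
services:
  seed-files:
    image: alpine
    volumes:
      - my-api-files:/files
      - ./jsonFileLocation:/big-data:ro
    command: sh -c 'cp /big-data/*.json /files'
Run it once before starting the API:
docker-compose run --rm seed-files
docker-compose up my-api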
