Docker bind mount mode forced to read-only

I'd like to build an application using Docker and get the built files back to the host using a bind mount, but I cannot get the Docker image to write to the mounted directory. Here is a minimal Dockerfile that reproduces the issue:
FROM alpine:3.7
WORKDIR /app
RUN touch /app/build/test.txt
The command that I use to run this build is docker build --rm -v "$(pwd)/build:/app/build" . The error I get is shown below:
$ docker build --rm -v "$(pwd)/build:/app/build" .
Sending build context to Docker daemon 81.29 MB
Step 1/3 : FROM alpine:3.7
---> 34ea7509dcad
Step 2/3 : WORKDIR /app
---> b0c4ac704af7
Removing intermediate container 234ef41fd395
Step 3/3 : RUN touch /app/build/test.txt
---> Running in e095ed8b29d5
touch: /app/build/test.txt: Read-only file system
I am running on Fedora 27, with Docker v1.13.1. There is a docker group on my machine to allow running docker commands without sudo, as explained here
I have tried the following without success:
Calling the docker command with sudo
Disabling SELinux (I keep it disabled at the moment)
Adding the z/Z mount option to the volume, as explained here (-v "$(pwd)/build:/app/build:Z")
Adding the rw mount option (-v "$(pwd)/build:/app/build:rw")
Calling docker build with no build directory on the host

As pointed out in the comments, docker build does not support the -v option. What I was trying to do with a Dockerfile and docker build should be done with a docker run command instead:
docker run -i -v $(pwd)/build:/app/build alpine:3.7 << EOF
touch /app/build/test.txt
... other commands ...
EOF
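Another sketch for getting build artifacts out of an image (the names myapp and myapp-tmp are placeholders, assuming the build output lands in /app/build inside the image): build normally, create a stopped container, and copy the directory out with docker cp:
docker build -t myapp .
docker create --name myapp-tmp myapp
docker cp myapp-tmp:/app/build ./build
docker rm myapp-tmp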

Related

How to access docker volume files from the code in a docker container

I have created a docker volume with the following command:
docker run -ti --rm -v TestVolume1:/testvolume1 ubuntu
Then I created a file there called TestFile.txt and added text to it.
I also have a simple "Hello world" .NET Core app with a Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:6.0
COPY bin/Release/net6.0/publish/ ShareFileTestInstance1/
WORKDIR /ShareFileTestInstance1
ENTRYPOINT ["dotnet", "ShareFileTestInstance1.dll"]
I published it using
dotnet publish -c Release
then ran
docker build -t counter-image -f Dockerfile .
And finally executed
docker run -it --rm --name=counter-container counter-image -v TestVolume1:/testvolume1 ubuntu
to run my app with a docker volume
So what I want to achieve is to access a file which is in a volume ("TestFile.txt" in my case) from the code in the container, for example:
Console.WriteLine(File.Exists("WHAT FILE PATH HAS TO BE HERE") ? "File exists." : "File does not exist.");
Is it also possible to combine all this stuff in a Dockerfile? I want to add one more container next and connect it to the volume to save data there.
The parameters for docker run can be either for docker or for the program running in the docker container. Parameters for docker go before the image name and parameters for the program in the container go after the image name.
The volume mapping is a parameter for docker, so it should go before the image name. So instead of
docker run -it --rm --name=counter-container counter-image -v TestVolume1:/testvolume1 ubuntu
you should do
docker run -it --rm --name=counter-container -v TestVolume1:/testvolume1 counter-image
When you do that, your file should be accessible for your program at /testvolume1/TestFile.txt.
It's not possible to do the mapping in the Dockerfile as you ask. Mappings may vary from docker host to docker host, so they need to be specified at run-time.
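To answer the path question directly: once the volume is mounted at /testvolume1, the file's path inside the container is /testvolume1/TestFile.txt, so (assuming the same names as above):
Console.WriteLine(File.Exists("/testvolume1/TestFile.txt") ? "File exists." : "File does not exist.");
You can also sanity-check the volume's contents with a throwaway container:
docker run --rm -v TestVolume1:/testvolume1 ubuntu cat /testvolume1/TestFile.txt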

How to copy SSH from JENKINS host into a DOCKER container?

I can't copy the file from the host into the container using the Dockerfile, because I'm simply not allowed to, as mentioned in the Docker documentation:
The path must be inside the context of the build; you cannot
COPY ../something /something, because the first step of a docker build
is to send the context directory (and subdirectories) to the docker
daemon.
I'm also unable to do so from inside the Jenkins job, because the job commands run inside the shell of the docker container; there is no way to talk to the parent (which is the Jenkins host).
This jenkins plugin could have been a life saver, but as mentioned in the first section: distribution of this plugin has been suspended due to unresolved security vulnerabilities.
This is how I copy files from the host to a docker image using a Dockerfile.
I have a folder called tomcat
Inside that, I have a tar file and Dockerfile
Commands for the whole process, just for understanding:
$ pwd
/home/user/Documents/dockerfiles/tomcat/
$ ls
apache-tomcat-7.0.84.tar.gz Dockerfile
Sample Docker file:
FROM ubuntu_docker
COPY apache-tomcat-7.0.84.tar.gz /home/test/
...
Docker commands:
$ docker build -t testserver .
$ docker run -itd --name test1 testserver
$ docker exec -it test1 bash
Now you are inside the docker container:
# ls
apache-tomcat-7.0.84.tar.gz
As you can see I am able to copy apache-tomcat-7.0.84.tar.gz from host to Docker container.
Notice the first line of the Docker documentation you shared:
The path must be inside the context of the build;
So as long as the path is reachable during build you can copy.
Another way of doing this would be using volume
docker run -itd -v $(pwd)/somefolder:/home/test --name test1 testserver
Notice the -v parameter.
You are telling Docker to mount Current_Directory/somefolder to the container's path /home/test.
Once the container is up and running, you can simply copy any file to $(pwd)/somefolder and it will appear inside the container at /home/test.
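Alternatively, for a file like an SSH key that shouldn't be baked into the image, docker cp also works from the host into a running container (assuming the container is named test1 as above and the target directory exists in it):
docker cp ~/.ssh/known_hosts test1:/root/.ssh/known_hosts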

Docker: Why does my home directory disappear after the build?

I have a simple docker file:
FROM ubuntu:16.04
MAINTAINER T-vK
RUN useradd -m -s /bin/bash -g dialout esp
USER esp
WORKDIR /home/esp
COPY ./entrypoint_script.sh ./entrypoint_script.sh
ENTRYPOINT ["/home/esp/entrypoint_script.sh"]
when I run docker build . followed by docker run -t -i ubuntu and look for the directory /home/esp, it is not there! The whole directory, including its files, seems to be gone.
Though, when I add RUN mkdir /home/esp to my docker file, it won't build, telling me mkdir: cannot create directory '/home/esp': File exists.
So what am I misunderstanding here?
I tested this on Debian 8 x64 and Ubuntu 16.04 x64.
With Docker version 1.12.2
Simply change your docker build command to:
docker build -t my-docker:dev .
And then to execute:
docker run -it my-docker:dev
Then you'll get what you want. You didn't tag your docker build, so you're actually running the plain Ubuntu image.
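Note that since the image sets an ENTRYPOINT, anything after the image name is passed to the entrypoint script as arguments; to inspect the filesystem directly you can override the entrypoint (a sketch, using the tag from above):
docker run --rm --entrypoint /bin/ls my-docker:dev -la /home/esp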

`docker cp` doesn't copy file into container

I have a dockerized project. I build, copy a file from the host system into the docker container, and then shell into the container to find that the file isn't there. How is docker cp supposed to work?
$ docker build -q -t foo .
Sending build context to Docker daemon 64 kB
Step 0 : FROM ubuntu:14.04
---> 2d24f826cb16
Step 1 : MAINTAINER Brandon Istenes <redacted@email.com>
---> Using cache
---> f53a163ef8ce
Step 2 : RUN apt-get update
---> Using cache
---> 32b06b4131d4
Successfully built 32b06b4131d4
$ docker cp ~/.ssh/known_hosts foo:/root/.ssh/known_hosts
$ docker run -it foo bash
WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded.
root@421fc2866b14:/# ls /root/.ssh
root@421fc2866b14:/#
So there was some mix-up with the names of images and containers. Obviously, the cp operation was acting on a different container than the one I brought up with the run command. In any case, the correct procedure is:
# Build the image, call it foo-build
docker build -q -t foo-build .
# Create a container from the image called foo-tmp
docker create --name foo-tmp foo-build
# Run the copy command on the container
docker cp /src/path foo-tmp:/dest/path
# Commit the container as a new image
docker commit foo-tmp foo
# The new image will have the files
docker run foo ls /dest
You need to docker exec to get into your container; your command creates a new container.
I have this alias to get into the last created container with the shell of the container
alias exec_last='docker exec -it $(docker ps -lq) $(docker inspect -f {{'.Path'}} $(docker ps -lq))'
What docker version are you using? As of Docker 1.8, cp supports copying from host to container:
• Copy files from host to container: docker cp used to only copy files from a container out to the host, but it now works the other way round: docker cp foo.txt mycontainer:/foo.txt
Please note the difference between images and containers. If you want that every container that you create from that Dockerfile contains that file (even if you don't copy afterward) you can use COPY and ADD in the Dockerfile. If you want to copy the file after the container is created from the image, you can use the docker cp command in version 1.8.
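If the file should exist in every container created from the image, the Dockerfile route looks like this (a sketch, assuming known_hosts has first been copied into the build context, since COPY cannot reach outside it):
FROM ubuntu:14.04
RUN mkdir -p /root/.ssh
COPY known_hosts /root/.ssh/known_hosts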

Does anyone have a working trivial example of a Dockerfile & command line that mount an external directory from linux into the docker image?

All I'm looking for is a Dockerfile and docker build+run commands on Linux to view /var/tmp via a mount point within the container. The issues here are all complicated cases, involve OS X & Windows, or try to do more than simply mount a volume. In my case I've simply tried to mount /var/tmp onto /foobar of a busybox image, run a container with the image, and use "ls /foobar" to see the contents.
Running "Docker version 1.6.1, build 97cd073" on Linux 4.0.1 w/ aufs, using a local repository.
http://docs.docker.com/userguide/dockervolumes/
Notes that:
Mount a Host Directory as a Data Volume
In addition to creating a volume using the
-v flag you can also mount a directory from
your Docker daemon's host into a container.
<snip>
$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp
training/webapp python app.py
This will mount the host directory, /src/webapp,
into the container at /opt/webapp.
Note: If the path /opt/webapp already exists
inside the container's image, its contents
will be replaced by the contents of
/src/webapp on the host to stay consistent
with the expected behavior of mount
This is very useful for testing, for example we can mount our source
code inside the container and see our application at work as we change
the source code. The directory on the host must be specified as an
absolute path and if the directory doesn't exist Docker will
automatically create it for you.
I'm using a local repository that acquires its "/data" directory
via the "-v" switch to docker run:
docker run -d -p 5000 \
-v '/var/lib/docker/registry:/data/registry/storage' \
'kampka/registry';
This seems to work as the hosts's /var/lib/docker/registry directory
gets entries added to it.
So I try a simple test for myself: build a minimal copy of busybox
with access to /var/tmp on the host system.
FROM localhost:5000/lembark/busybox
MAINTAINER lembark@wrkhors.com
VOLUME [ "/foobar" ]
ENV PATH /bin
WORKDIR /
ENTRYPOINT [ "/bin/sh" ]
At that point running "docker build" executes the VOLUME command, but does not create the mount point:
$ docker build --tag="localhost:5000/lembark/hak" . ;
Sending build context to Docker daemon 7.68 kB
Sending build context to Docker daemon
Step 0 : FROM localhost:5000/lembark/busybox
---> c1a1f5abbf79
Step 1 : MAINTAINER lembark@wrkhors.com
---> Using cache
---> b46677881767
Step 2 : VOLUME /foobar
---> Running in 7127bdbcfb56
---> bcf9c3f1c441
Removing intermediate container 7127bdbcfb56
Step 3 : ENV PATH /bin
---> Running in 89f92c815860
---> 780fea54a67f
Removing intermediate container 89f92c815860
Step 4 : WORKDIR /
---> Running in aa3871c408a1
---> 403190e9415b
Removing intermediate container aa3871c408a1
Step 5 : ENTRYPOINT /bin/sh
---> Running in 4850561f7ebd
---> 77c32530b4a9
Removing intermediate container 4850561f7ebd
Successfully built 77c32530b4a9
The "VOLUME /foobar" in Step 2 seems to indicate that a mount point
should be available at runtime.
At that point using either of
docker run --rm -t -i localhost:5000/lembark/hak;
docker run --rm -t -i -v /foobar localhost:5000/lembark/hak;
docker run --rm -t -i -v /var/tmp:/foobar localhost:5000/lembark/hak;
leaves me with:
# ls -al /foobar
ls: /foobar: No such file or directory
Adding a mkdir before the VOLUME leaves me with a /foobar
directory with an anonymous volume, not the mapping from /var/tmp:
...
RUN [ "mkdir", "/foobar" ]
VOLUME [ "/foobar" ]
or
# made a local ./foobar directory, added that to the image.
COPY [ "foobar", "/foobar" ]
Both of these leave me with /foobar, but no way to map any external directory to it. Instead I keep getting anonymous volumes:
# mount | grep foobar;
/dev/mapper/vg00-var--lib on /var/lib/docker/aufs/mnt/dd12f3e11a6fcb88627412a041b7c910e4d32dc1bf0c15330899036c59d7b3d9/foobar type xfs (rw,noatime,nodiratime,attr2,inode64,logbufs=8,logbsize=256k,logdev=/dev/vg02/var-lib-extlog,noquota)
No combination with or without each of mkdir, VOLUME, COPY, or -v leaves me viewing /var/tmp under /foobar.
thanks
This works for me
Dockerfile
FROM busybox:latest
VOLUME /foo/bar
Command
$ docker build -t bb .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM busybox:latest
---> 8c2e06607696
Step 1 : VOLUME /foo/bar
---> Running in a94615dd3353
---> fa500ba91831
Successfully built fa500ba91831
$ docker run -it -v /tmp:/foobar bb ls /foobar
[contents of my /tmp directory]
Note that you don't need the VOLUME command to do this. docker run -it -v /tmp:/foobar busybox:latest ls /foobar works just as well.
I did the same as @Nathaniel Waisbrot.
vagrant@vagrant:/code/busybox$ ls
Dockerfile
vagrant@vagrant:/code/busybox$ docker build -t busybox:0.0.1 .
Sending build context to Docker daemon 9.216 kB
Sending build context to Docker daemon
Step 0 : FROM busybox:latest
---> 8c2e06607696
Step 1 : VOLUME /foobar
---> Running in 0b99eac833aa
---> a93b5ae5de5f
Removing intermediate container 0b99eac833aa
Successfully built a93b5ae5de5f
vagrant@vagrant:/code/busybox$ docker run -i -v /code/busybox/:/foobar busybox:0.0.1 ls /foobar
Dockerfile
vagrant@vagrant:/code/busybox$
Here is the Dockerfile I used to build the image.
FROM busybox:latest
VOLUME [ "/foobar" ]
Here you can see that I list my /code/busybox directory to show you the files in it.
Then I build the image.
Then I run the image and map my /code/busybox directory to the image's /foobar directory with -v /code/busybox:/foobar.
I then execute the image I built with the ls command and the directory I want to list: ls /foobar.
The results are printed to the console.
Just to be clear, I think there are a few items wrong with what you are doing, which is why you aren't seeing the files.
FROM localhost:5000/lembark/busybox
MAINTAINER lembark@wrkhors.com
VOLUME [ "/foobar" ]
ENV PATH /bin
WORKDIR /
ENTRYPOINT [ "/bin/sh" ]
You are creating your image from an image on your PC. What is that image from? That could be one issue, since the image you are building inherits its base properties and OS from that image.
The WORKDIR command sets the working directory. Since you aren't importing anything, you don't need this command.
You are overriding the ENTRYPOINT, i.e. the command that is executed each time the container is initialized, by setting it to ["/bin/sh"]. That could be why your command is not executing: you run the container and that entrypoint executes instead.
Please see the example above. That is a proper Dockerfile; it will compile, provide the console output listed above, and does exactly what you are asking. Without seeing the Dockerfile for the image you are building your newest image from, it's hard to determine exactly what other pieces you have incorrect.
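For comparison, a minimal working version of that Dockerfile might look like this (a sketch; the tag hak is a placeholder, and CMD is used instead of ENTRYPOINT so a trailing command like ls replaces it cleanly):
FROM busybox:latest
VOLUME [ "/foobar" ]
CMD [ "/bin/sh" ]
Then build and run with the host directory mapped:
docker build -t hak .
docker run --rm -v /var/tmp:/foobar hak ls /foobar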
