One of the folders in my project is read-only:
ls -la client/cypress
drwxr-xr-x 10 user staff 320 Nov 2 18:06 .
drwx------ 2 root staff 64 Nov 2 17:58 resource <--- this one
I've ignored it in the .dockerignore at the base directory (the build context):
client/cypress/resource
A simple Dockerfile:
WORKDIR /src
COPY client/ client
CMD ["/bin/bash"]
$ docker build . fails with:
error from sender: open client/cypress/resource: permission denied
Running with sudo works, but why won't Docker just ignore the folder? If I ignore the directory that contains it instead, the build works fine. (I know I can chmod the folder, but it's generated programmatically by other tooling.)
The resource folder is owned by root:staff with permissions 700, which means only the root user can read, write, or traverse it; the staff group has no permissions on it at all.
Running with sudo works because sudo executes the command as the root user.
Regarding the .dockerignore file: it depends on the permissions of the .dockerignore file itself. If it is owned by root:staff with permissions 640 or even more restrictive, then a regular user running docker build . is not able to read it.
That being said, not being able to read the .dockerignore file amounts to ignoring the file and its contents.
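As a quick sanity check (the paths below are the ones from the question), you can compare what the user running docker build can actually read:
ls -la client/cypress     # resource shows up as drwx------ root staff, unreadable to a non-root user
ls -l .dockerignore       # this file must be readable by whoever runs docker build
cat .dockerignore         # should list client/cypress/resource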
Related
I have a Java application that saves a CSV file to the root folder of the application. I am trying to create a Docker image of this and run it as a container. However, I want a non-root user with ID 1010, not root, to be able to access this file. I get errors when trying to specify USER 1010 in my Dockerfile:
FROM adoptjdk (placeholder)
COPY ./myapp.jar /app/
USER 1010
WORKDIR /opt
EXPOSE PORTNO
That's just the basics of the Dockerfile; essentially, I want user 1010 to be able to access the CSV file that my Java application creates. I am not sure where the CSV file is saved when the application runs through Docker.
If your application is supposed to write to / inside the container while running as user ID 1010, then just set the filesystem permissions accordingly. Granted, it may not be the recommended setup, but it is not impossible.
So put in your Dockerfile lines like
RUN chmod 777 /
or, to be a bit gentler, first check the numeric group ID of / with
RUN ls -ln /
and loosen the permissions only for that group:
RUN chmod 775 /
Then ensure your Java application runs with UID 1010 and the GID you saw in that listing. In short, filesystem permissions work the same way inside the container as outside.
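A minimal sketch of that approach, assuming a generic JRE base image, GID 0 for /, and an arbitrary port (the question only gives placeholders for these):
FROM eclipse-temurin:17-jre             # assumption: any JRE base image
COPY ./myapp.jar /app/
RUN ls -ln /                            # note the numeric GID of / (typically 0)
RUN chmod 775 /                         # make / writable by that group
USER 1010:0                             # run as UID 1010 with the group that owns /
WORKDIR /opt
EXPOSE 8080                             # assumption: replace with the real port
CMD ["java", "-jar", "/app/myapp.jar"]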
I am going through a Docker course and have a simple Dockerfile which sets up an image:
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install
COPY . .
ENV APP_URL=http://api.myapp.com
EXPOSE 3000
CMD ["npm", "start"]
Now, the Dockerfile switches to USER app, and when I open a shell in the container with docker exec -it 187 sh and run whoami, I get the response app, which is correct. The problem comes when I try to write a file using the echo command:
echo data > data.txt
sh: can't create data.txt: Permission denied
So then I run ls -la to view files and perms:
/app $ ls -la
total 1456
drwxr-xr-x 1 root root 4096 Oct 20 16:38 .
drwxr-xr-x 1 root root 4096 Oct 20 19:54 ..
-rw-rw-r-- 1 root root 13 Oct 20 13:46 .dockerignore
drwxr-xr-x 7 root root 4096 Mar 9 2021 .git
-rw-r--r-- 1 root root 310 Mar 5 2021 .gitignore
-rw-rw-r-- 1 root root 311 Oct 20 16:38 Dockerfile
-rw-r--r-- 1 root root 3362 Mar 5 2021 README.md
drwxr-xr-x 1 root root 4096 Oct 20 16:38 node_modules
-rw-rw-r-- 1 root root 1434378 Oct 20 16:10 package-lock.json
-rw-r--r-- 1 root root 814 Oct 20 16:10 package.json
drwxr-xr-x 2 root root 4096 Mar 9 2021 public
drwxr-xr-x 2 root root 4096 Oct 20 13:22 src
This shows that root is the owner and group for these files and directories, which was obviously intended, since we don't want to be logging in as root. So what should I do to be able to add this file inside the container? What is the best practice here? Maybe I missed a step somewhere?
Edit: Should /app be owned by the USER app? If so, what is the point of adding a new user? Should I add this to the Dockerfile:
RUN chown app /app
Thanks!
Should the /app be owned by the USER app?
Definitely not. You want to prevent the application from overwriting its own code and static assets, intentionally or otherwise.
So what should I do to be able to add this file to the container?
Create a dedicated directory to hold your application's data. This should be a different directory from the directory with the source code; a subdirectory of your normal application directory is fine. In the Dockerfile, make this directory (only) be owned by your non-root user.
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
# don't switch to this user quite yet
WORKDIR /app
# usual setup and build stuff
COPY package*.json ./
RUN npm ci
COPY . ./
RUN npm run build
# create the data directory and set its owner
RUN mkdir data && chown app data
# _now_ switch to the non-root user when running the container
EXPOSE 3000
USER app
CMD ["npm", "start"]
In practice, you probably want to persist the application's data beyond the lifespan of a single container. One approach to this is to use a Docker named volume. If you do this, the volume will be initialized from the image, including its ownership, and so you don't need any special setup here.
docker volume create app-data
docker run -v app-data:/app/data ...
For several reasons you may prefer a bind mount instead (you need to directly access the files from outside of Docker; it may be easier to back up and restore the files; ...). You can use the same docker run -v option to bind-mount a host directory into the container, but the mount brings along the numeric uid of its host-system owner. However, notice that the only thing in the image owned by the app user is the data directory, and the code is otherwise world-readable, so if we run the container with the same uid as the host user, this still works.
docker run -v "$PWD/data:/app/data" -u $(id -u) ...
You should not normally need Docker volumes for your application code (it is contained in the image), nor should you need to build a specific host uid into the image.
I have a Dockerfile:
FROM jenkins/jenkins:lts-centos
USER jenkins
ENV PLUGIN_DIR=$JENKINS_HOME/plugins
RUN mkdir $JENKINS_HOME/plugins
RUN ls -aliF $JENKINS_HOME/
This results in no plugins folder being present in the listing.
The result is the same when the root user is used.
A workaround for this is to use WORKDIR:
FROM jenkins/jenkins:lts-centos
USER jenkins
WORKDIR $JENKINS_HOME/plugins
RUN chown jenkins:jenkins $JENKINS_HOME/plugins
RUN ls -aliF $JENKINS_HOME/
but the plugins folder ends up owned by root:
35539281 drwxr-xr-x 2 root root 6 May 26 21:45 plugins/
The same happens with user root:
FROM jenkins/jenkins:lts-centos
USER root
WORKDIR $JENKINS_HOME/plugins
RUN chown jenkins:jenkins $JENKINS_HOME/plugins
RUN ls -aliF $JENKINS_HOME/
35539493 drwxr-xr-x 2 root root 6 May 26 21:52 plugins/
The jenkins/jenkins:lts-centos image declares $JENKINS_HOME as a VOLUME.
This somehow prevents creating a folder there with custom ownership or permissions.
Any ideas how to fix this?
Found a workaround:
Create a folder jenkins/plugins in the build context and copy it into the image:
FROM jenkins/jenkins:lts-centos
USER root
COPY --chown=jenkins:jenkins jenkins/ $JENKINS_HOME/
RUN ls -aliF $JENKINS_HOME/
This results in jenkins ownership:
2367864 drwxr-xr-x 2 jenkins jenkins 6 May 27 09:14 plugins/
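For reference, a minimal sketch of the build context this COPY assumes (the jenkins/ folder only needs to contain an empty plugins/ subfolder; the image tag is just an example):
mkdir -p jenkins/plugins
docker build -t my-jenkins .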
As this is only a workaround, the question still remains: how to do it properly?
I'm setting up a playground to run Apache Airflow. I want the directory /airflow to be owned, and therefore writable, by the user airflow. My Dockerfile looks like this:
FROM salimfadhley/testpython:latest AS base_python
COPY . /project
WORKDIR /project/src
RUN SLUGIFY_USES_TEXT_UNIDECODE=yes python -m pip install -e /project/src
FROM base_python AS application
ENV AIRFLOW_HOME=/airflow
RUN useradd -G sudo -u 1000 airflow
VOLUME /airflow
WORKDIR /airflow
RUN chown airflow:airflow /airflow
USER airflow
Unfortunately, when I try to write to that directory I get an error:
airflow@fc047510b631:/airflow$ touch hello
touch: cannot touch 'hello': Permission denied
airflow@fc047510b631:/airflow$ cd ..
airflow@fc047510b631:/$ ls -l | grep airflow
drwxr-xr-x 2 root root 4096 Feb 12 13:38 airflow
drwxr-xr-x 6 airflow sudo 4096 Feb 12 13:35 project
drwxr-xr-x 4 airflow sudo 4096 Feb 12 11:12 src
Is there a way to fix this so that the directory /airflow in the container is a persistent volume which is owned, and therefore writable, by the user "airflow"?
Thanks!
Volumes are mounted with the uid/gid, along with file permissions, of the volume source. If that's a host mount, the uid/gid and permissions on your host directory need to be changed. If that's a named volume, you'll need to modify the permissions inside that named volume.
What you do get from fixing the permissions in your image, as you've done in your Dockerfile, is that new named volumes are created with the correct permissions: Docker initializes an empty named volume with the contents of your image, including file ownership and permissions. However, once a named volume has been initialized with content, further uses of it skip the initialization step and you'll see the files and permissions from the previous usage.
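A sketch of what fixing those permissions can look like in practice (the volume name, host path, and the uid:gid pair 1000:1000 are assumptions based on the useradd -u 1000 line above, not commands from the question):
# named volume: fix ownership from a throwaway container
docker run --rm -v airflow-data:/airflow alpine chown -R 1000:1000 /airflow
# host (bind) mount: fix the host directory itself, then mount it
sudo chown -R 1000:1000 ./airflow-data
docker run -v "$PWD/airflow-data:/airflow" ...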
One of the things we often do is package all the source code into the image when we build it with a Dockerfile:
ADD . /app
How can we avoid including the .git directory in a simple way?
I tried the Unix shell-glob way of handling this, using ADD [^.]* /app/
Complete sample:
docker#boot2docker:/mnt/sda1/tmp/abc$ find .
.
./c
./.git
./Dockerfile
./good
./good/a1
docker#boot2docker:/mnt/sda1/tmp/abc$ cat Dockerfile
FROM ubuntu
ADD [^.]* /app/
docker#boot2docker:/mnt/sda1/tmp/abc$ docker build -t abc .
Sending build context to Docker daemon 4.096 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
---> 04c5d3b7b065
Step 1 : ADD [^.]* /app/
---> 5d67603f108b
Removing intermediate container 60159dee6ac8
Successfully built 5d67603f108b
docker#boot2docker:/mnt/sda1/tmp/abc$ docker run -it abc
root#1b1705dd66a2:/# ls -l app
total 4
-rw-r--r-- 1 1000 staff 30 Jan 22 01:18 Dockerfile
-rw-r--r-- 1 root root 0 Jan 22 01:03 a1
-rw-r--r-- 1 root root 0 Jan 22 00:10 c
And secondly, it loses the directory structure, since good/a1 gets changed to a1.
Related source code in Docker is https://github.com/docker/docker/blob/eaecf741f0e00a09782d5bcf16159cc8ea258b67/builder/internals.go#L115
You may exclude unwanted files with the help of the .dockerignore file.
How can we avoid including the .git directory in a simple way?
Just create a file called .dockerignore in the root of the context folder with the following lines:
**/.git
**/node_modules
With these lines, Docker excludes the .git and node_modules directories from any subdirectory, including the root. Docker supports a special wildcard string ** that matches any number of directories (including zero).
And secondly, it loses the directory structure, since good/a1 gets changed to a1
With .dockerignore it won't:
$ docker run -it --rm sample tree /opt/
/opt/
├── Dockerfile
├── c
│ └── no_sslv2.patch
└── good
└── a1
└── README
3 directories, 3 files
Reference to official docs: .dockerignore
Add a .dockerignore file in your root directory (the syntax is like that of a .gitignore file).
Because .gitignore and .dockerignore do not use the same syntax, I ended up running this before build:
git ls-files --others --ignored --exclude-standard --directory > .dockerignore
printf "%s\n" .git >> .dockerignore
This way I get a fresh list of actually ignored files from a single source: what Git itself ignores. I think this is the most reliable way.
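For illustration, with a hypothetical .gitignore that ignores node_modules/ and dist/ (and both directories actually present in the working tree), the generated .dockerignore would look roughly like this:
node_modules/
dist/
.git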