I'm trying to write a Dockerfile to run Pydio community edition. I have an almost working Dockerfile.
RUN mv pydio-core-${PYDIO_VERSION} /var/www/pydio-core
RUN chmod -R 770 /var/www/pydio-core
RUN chmod -R 777 /var/www/pydio-core/data/files/ /var/www/pydio-core/data/personal/
RUN chown -R www-data:www-data /var/www/pydio-core
VOLUME /var/www/pydio-core/data/files
VOLUME /var/www/pydio-core/data/personal
This works, except that when the container is started for the first time, the access rights on the files and personal folders are 755 and their owner is not www-data but 1000. So once it is started, I must connect to the container to fix the permissions (770) and ownership (www-data), and then everything works.
I just wonder if there is something in my Dockerfile that could explain the problem, or if the issue more likely comes from the Pydio source code itself.
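For what it's worth, one workaround I can sketch (an assumption on my part, not something confirmed for Pydio) is to re-apply ownership and mode from an entrypoint script at container start, since the contents of a mounted volume keep whatever permissions the mount brings in rather than what the image set at build time:

```shell
#!/bin/sh
# Sketch of a fix-at-startup entrypoint helper. The www-data user and the
# 770 mode come from the question above; chown needs root inside the
# container, so its failure is tolerated here for illustration.
fix_perms() {
  dir="$1"
  chown -R www-data:www-data "$dir" 2>/dev/null || true  # needs root
  chmod -R 770 "$dir"
}
# A real entrypoint would run, before `exec "$@"`:
#   fix_perms /var/www/pydio-core/data/files
#   fix_perms /var/www/pydio-core/data/personal
```

This runs every time the container starts, so it also covers freshly created volumes.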
Seems like a basic issue, but I couldn't find any answers so far...
When using ADD/COPY in a Dockerfile and running the image on Linux, the default permission of a file copied into the image is 644, and the owner of the file seems to be root.
However, a non-root user starts the container, and a file copied with 644 permissions cannot be executed by that user; if the file is executed at ENTRYPOINT, the container fails to start with a permission denied error.
I read in one of the posts that COPY/ADD in Docker 17.09+ allows --chown, but in my case I don't know which non-root user will start the container, so I cannot set the ownership to that user.
I also saw another workaround: ADD/COPY the files to a temporary location and use RUN to copy them from there to the actual folder, like I am doing below. But this approach doesn't work, as the final image doesn't have the files in /opt/scm.
#Installing Bitbucket and setting variables
WORKDIR /tmp
ADD atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz .
COPY bbconfigupdater.sh .
#Copying Entrypoint script which will get executed when container starts
WORKDIR /tmp
COPY entrypoint.sh .
RUN ls -lrth /tmp
WORKDIR /opt/scm
RUN pwd && cp /tmp/bbconfigupdater.sh /opt/scm \
&& cp /tmp/entrypoint.sh /opt/scm \
&& cp -r /tmp/atlassian-bitbucket-${BITBUCKET_VERSION} /opt/scm \
&& chgrp -R 0 /opt/ \
&& chmod -R 755 /opt/ \
&& chgrp -R 0 /scm/bitbucket \
&& chmod -R 755 /scm/bitbucket \
&& ls -lrth /opt/scm && ls -lrth /scmdata
Any help is appreciated to figure out how I can get my entrypoint script copied to the desired path with execute permissions set.
The default file permission is whatever the permission of the file is in the build context you copy it from. If you control the source, it's best to fix the permissions there to avoid a copy-on-write operation. Otherwise, if you cannot guarantee that the system building the image will have the execute bit set on the files, a chmod after the copy operation will fix the permission. E.g.:
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
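If you do control the build context, the execute bit can also be set there before the build, so no extra image layer is needed. A small self-contained illustration (the touch just stands in for the real script):

```shell
touch entrypoint.sh      # stand-in for the real script in the build context
chmod +x entrypoint.sh   # set the execute bit so COPY preserves it
# If the context is a git checkout, record the bit so fresh clones keep it:
#   git update-index --chmod=+x entrypoint.sh
```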
A better option with newer versions of Docker (one that didn't exist when this answer was first posted) is to use the --chmod flag (the permissions must be specified in octal, at last check):
COPY --chmod=0755 entrypoint.sh .
You do not need to know who will run the container. The user inside the container is typically configured by the image creator (using USER) and doesn't depend on the user running the container from the docker host. When the user runs the container, they send a request to the docker API which does not track the calling user id.
The only time I've seen the host user matter is if you have a host volume and want to avoid permission issues. If that's your scenario, I often start the entrypoint as root, run a script called fix-perms to align the container uid with the host volume uid, and then run gosu to switch from root back to the container user.
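As a rough sketch of that pattern (names like appuser and /data are my assumptions, not from the question), the uid detection at the heart of a fix-perms style entrypoint looks like this:

```shell
#!/bin/sh
# Sketch of the uid-alignment step in a fix-perms entrypoint.
# volume_uid reports which uid owns a mounted path, so the entrypoint
# (running as root) can realign the container user before dropping privileges.
volume_uid() {
  stat -c '%u' "$1"
}
# A real entrypoint would then do something like:
#   usermod -u "$(volume_uid /data)" appuser
#   exec gosu appuser "$@"
```

The gosu step at the end ensures the application never actually runs as root; root is only used long enough to adjust the uid.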
A --chmod flag was added to the ADD and COPY instructions in Docker CE 20.10, so you can now do:
COPY --chmod=0755 entrypoint.sh .
To be able to use it you need to enable BuildKit:
# enable BuildKit for docker
export DOCKER_BUILDKIT=1
# enable BuildKit for docker-compose
export COMPOSE_DOCKER_CLI_BUILD=1
Note: it seems not to be documented at this time; see this issue.
I have several files in a directory on the host machine which I am trying to copy to the container, and I also have some RUN commands in my docker-compose setup.
The first set, up until the crowd section, works fine, but anything from the crowd jar down just fails. I tried running the manual docker cp command to copy the files from the host to the container, and that works. Can someone please shed some light on this?
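For reference, the manual copy that does work is along these lines (the container name here is illustrative, not from the question):

```shell
docker cp crowd-filter.properties my-tomcat:/usr/local/tomcat/webapps/guacamole/WEB-INF/lib/
```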
This is a part of my Dockerfile:
WORKDIR /usr/local/tomcat
USER root
COPY server.xml conf/server.xml
RUN chmod 660 conf/server.xml
USER root
ADD tomcat.keystore /usr/local/tomcat/
RUN chmod 644 tomcat.keystore
RUN chown root:staff /usr/local/tomcat/tomcat.keystore
ADD crowd-auth-filter-1.0.0.jar /usr/local/tomcat/webapps/guacamole/WEB-INF/lib/
ADD crowd-filter.properties /usr/local/tomcat/webapps/guacamole/WEB-INF/lib/
RUN chmod 644 crowd-filter.properties
ADD web.xml /usr/local/tomcat/webapps/guacamole/WEB-INF/
RUN /usr/local/tomcat/bin/shutdown.sh
RUN /usr/local/tomcat/bin/startup.sh
Thanks
I'm pulling a WordPress image and everything is working fine, but when I go to the WordPress editor page, the following error appears at the top of the screen:
Autoptimize cannot write to the cache directory (/var/www/html/wp-content/cache/autoptimize/), please fix to enable CSS/ JS optimization!
I assumed RUN chown -R www-data:www-data wp-content/ would solve that issue, but it's not working. Any ideas would be appreciated. My Dockerfile is below.
FROM wordpress:4.9.2-php7.2-apache
RUN chown -R www-data:www-data wp-content/
COPY ./src /var/www/html/
# Install the new entry-point script
COPY secrets-entrypoint.sh /secrets-entrypoint.sh
RUN chmod +x /secrets-entrypoint.sh
ENTRYPOINT ["/secrets-entrypoint.sh"]
EXPOSE 80
CMD ["apache2-foreground"]
I'm not sure of the exact permission, but you don't want to be writing inside a container, so you should define a volume. Since you don't need the data to persist, you can do this in your Dockerfile like:
VOLUME /var/www/html/wp-content/cache
This will set up a default volume where Docker will choose the location on your host, but you can mount it to a named volume instead when the container is created if you like.
You could also use a tmpfs volume which is good for things like cache files.
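As an illustration (the mount options here are my assumption of a sensible setup, not part of the original answer), a tmpfs mount for the cache directory can be requested at run time:

```shell
# Mount the cache directory as tmpfs so it is always writable in the container
docker run -d \
  --tmpfs /var/www/html/wp-content/cache/autoptimize:rw,mode=1777 \
  my-wordpress-image
```

A tmpfs mount lives only in memory and disappears with the container, which is exactly the lifecycle you want for cache files.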
I have created a very basic Dockerfile which wraps the official Jenkins Docker image. I am running an instance of my image and have come across the following problem:
I start an instance of my image and create a file in it, in the /tmp directory (any directory will work), called tmp.txt.
I go to the "Script Console" of my running Jenkins instance and in the console, I type in the following code:
f = new File('/tmp/')
println f.list()
I expect to see a list of files including my newly created /tmp/tmp.txt, but that file is not in the list. This is symptomatic of the problem that is blocking me: I want to call evaluate() on a Groovy script from a Jenkinsfile, but that script is not visible to Jenkins.
My gut feeling is that this has something to do with Docker file system layers: since Jenkins is installed at a lower file system layer in the base image, perhaps it cannot access files created in the running instance's layer above it. The behavior seems very weird but somewhat understandable.
Has anyone encountered this issue before? If so, how were you able to resolve it?
FROM jenkins
ENV JAVA_OPTS="-Xmx8192m"
USER root
RUN mkdir /var/log/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R jenkins:jenkins /var/cache/jenkins
RUN useradd -ms /bin/bash jenkadm
USER jenkadm
WORKDIR /home/jenkadm
COPY id_rsa /home/jenkadm/.ssh/
COPY id_rsa.pub /home/jenkadm/.ssh/
USER root
RUN mkdir -p /opt/staples/ci-tools
RUN chown jenkadm:jenkadm /opt/staples/*
USER jenkins
ENV JENKINS_OPTS="--handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
I have a problem with my Dockerfile. I installed py-spidev on my Docker container to fetch data from a sensor.
Everything works and is installed in the container.
My only problem is that the /dev/spi* device files have read-write rights only for root, and I need read rights for www-data. If I execute chmod 666 /dev/spi* in a running container, everything works fine. I want the chmod to be executed in the Dockerfile.
https://github.com/legionth/westfall-pi/blob/master/Dockerfile
In your Dockerfile, just add this line:
RUN chmod 666 /dev/spi*
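One caveat worth hedging here (my addition, not part of the original answer): /dev entries are typically populated in the running container, not at build time, so if the build-time chmod does not stick, the same fix can be applied from an entrypoint at startup. A sketch, parameterized so the logic can be exercised outside a container:

```shell
#!/bin/sh
# relax_spi_perms: make SPI device nodes world-readable/writable at startup.
relax_spi_perms() {
  # $1: directory holding the device nodes (normally /dev)
  for dev in "$1"/spi*; do
    # The glob stays literal when nothing matches, so check existence first.
    [ -e "$dev" ] && chmod 666 "$dev"
  done
  return 0
}
# A real entrypoint would call:
#   relax_spi_perms /dev
#   exec "$@"
```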