Docker container file permissions

I have a Java application that saves a CSV file to the application's root folder. I am trying to create a Docker image of this and run it as a container. However, I want a non-root user with ID 1010, rather than root, to be able to access this file. I get errors when I try to specify USER 1010 in my Dockerfile:
FROM adoptjdk (placeholder)
COPY ./myapp.jar /app/
USER 1010
WORKDIR /opt
EXPOSE PORTNO
That's just the basics of the Dockerfile. Essentially, I want user 1010 to be able to access the CSV file that my Java application creates. I am also not sure where the CSV file is saved when the application runs inside Docker.

If your application is supposed to write to / inside the container, and it is supposed to run as user ID 1010, then set the filesystem permissions accordingly. It may not be the recommended setup, but it is not impossible.
So add lines like the following to your Dockerfile:
RUN chmod 777 /
or, to be a bit less drastic, check the group ID (GID) of / with
RUN ls -ln /
RUN chmod 775 /
then make sure your Java application runs with UID 1010 and the GID you saw on the filesystem. In short, these things work the same way inside the container as they do outside.
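A less permissive variant is to give UID 1010 its own writable directory instead of opening up /. The following is only a sketch built around the question's Dockerfile; the base image tag, the /app layout and the CMD line are assumptions:
# base image tag is a placeholder, as in the question
FROM adoptopenjdk:11-jre-hotspot
COPY ./myapp.jar /app/
# give UID 1010 ownership of the directory the CSV will be written to
RUN chown -R 1010:1010 /app
USER 1010
# run from /app so a CSV saved to the application's "root folder" lands here
WORKDIR /app
CMD ["java", "-jar", "/app/myapp.jar"]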

Related

Docker volume file permissions

I have created a Docker container with this command:
docker run -d -p 20001:80 -v /home/me/folder1/:/usr/local/apache2/htdocs/ httpd:2.4
This container contains scripts which create files and directories in the /usr/local/apache2/htdocs/ folder.
I can see these files on the host computer in the /home/me/folder1/ folder.
I have tried to open one of these files because I want to write something to it.
I cannot do that, because I do not have write permission on these files: they are owned by the root user.
What can I do to make these files writable by the "me" user? I want this to happen automatically.
Thanks a lot
You have to run
sudo chmod +x nameofscript.sh
With this command, executed as root (via sudo), you make the script executable for all users.
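If the actual goal is write access for your own user rather than execute permission, a one-off fix on the host could look like this sketch (paths and user name taken from the question):
# take ownership of the files the container created as root
sudo chown -R me:me /home/me/folder1/
# or, less strictly, just add write permission for everyone
sudo chmod -R a+w /home/me/folder1/
To make this automatic, aligning the UID used inside the container with your host UID (as in the shared-UID approach further down this page) avoids having to re-run such commands after every build.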

Why does creating a Docker image with a custom non-root user require republishing derived images?

I am reading an article on Docker security about running Docker processes as a non-root user. It states that:
FROM openjdk:8-jdk
RUN useradd --create-home -s /bin/bash user
WORKDIR /home/user
USER user
This is simple, but forces us to republish all these derived images,
creating a maintenance nightmare.
1) What does it mean to republish derived images?
2) How is this a maintenance nightmare?
3) Isn't this common practice, since most examples on the internet use a similar method to run Docker as a non-root user?
Say I have an application
FROM openjdk:8-jre
COPY myapp.jar /
CMD ["java", "-jar", "/myapp.jar"]
Now, I want to use your technique to have a common non-root user. So I need to change this Dockerfile to
# change the base image name
FROM my/openjdk:8-jre
# change back to root for file installation
USER root
COPY myapp.jar ./
# use the non-root user at runtime
USER user
CMD ["java", "-jar", "./myapp.jar"]
Further, suppose there's a Java security issue and I need to update everything to a newer JRE. If I'm using the standard OpenJDK image, I just need to make sure I've docker pulled a newer image and then rebuild my application image. But if I'm using your custom intermediate image, I need to rebuild that image first, then rebuild the application. This is where the maintenance burden comes in.
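To make the difference concrete, here is roughly what a JRE security update means in each setup (the image tags and build-context directories are assumptions):
# with a custom intermediate image: two rebuilds, in order
docker pull openjdk:8-jre
docker build -t my/openjdk:8-jre ./base-image   # rebuild the shared base first
docker build -t my/app ./app                    # then rebuild every application image on top of it
# with the standard Docker Hub image: one rebuild per application
docker pull openjdk:8-jre
docker build -t my/app ./app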
In my Docker images I tend to just RUN adduser and specify the USER in the image itself. (They don't need a home directory or any particular shell, and they definitely should not have a host-dependent user ID.) If you broadly think of a Dockerfile as having three "parts" – setting up OS-level dependencies, installing the application, and defining runtime parameters – I generally put this in the first part.
# standard Docker Hub image
FROM openjdk:8-jre
# add the user as a setup step
RUN adduser user
WORKDIR /app
# install files as root
COPY myapp.jar .
# set the default runtime user
USER user
CMD ["java", "-jar", "/app/myapp.jar"]
(Say your application has a security issue. If you've installed files as root and are running the application as non-root, then the attacker can't overwrite the installed application inside the container.)
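A possible way to exercise the sketch above (the image tag is an assumption):
docker build -t myapp .
# the process inside the container now runs as the unprivileged "user" account
docker run --rm myapp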

Shared Volume Docker Permissions

I am using docker-compose to create a number of Docker containers. All of the containers have a shared volume:
volumes:
- ${PHP_SERVICES_FOLDER}:/var/www/web
The Docker containers are as follows:
Jenkins(FROM jenkins/jenkins:latest) - This writes to the shared volume
Nginx(FROM nginx) - This reads from the shared volume and uses the php-fpm container
PHP-FPM(FROM php:7.2-fpm)
With the volume's files having permissions 777, Nginx and PHP can read, write and execute the files, but the problem comes back as soon as I trigger a build in Jenkins, which updates files in the volume.
I think the reason it works with permissions 777 is that they give 'other' users full access to the volume.
How can I have Nginx, PHP-FPM and Jenkins use the same user to read, write and execute files in that volume?
You could create a user with the same UID in each of the Dockerfiles and then give that UID permissions on the volume.
For example:
# your dockerfile
RUN groupadd -g 799 appgroup
RUN useradd -u 799 -g appgroup shareduser
USER shareduser
Then you simply need to chown everything in the volume to the newly created UID (in any container or on the host, as UIDs are shared between container and host):
chown -R 799:799 /volume/root/
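Applied to the PHP-FPM image from the question, the pattern could look like the sketch below; the Jenkins and Nginx services would need the equivalent user setup in their own Dockerfiles, and the UID/GID values are the ones chosen above:
FROM php:7.2-fpm
# same UID/GID in every image that touches the shared volume
RUN groupadd -g 799 appgroup && useradd -u 799 -g appgroup shareduser
USER shareduser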

External SpringBoot properties file on Docker

I'm currently trying to automatically externalize my application.yml file from my Spring Boot app's default location, /src/main/resources/application.yml. I know Spring Cloud Config Server is a good or preferred way to do this, but that may not be an option in my case at this time.
I'm currently trying to extract the .yml file from its .jar and then copy it to my desired folder.
Unfortunately, I can't seem to get it to work at all! At one point I run RUN ls -lrt /tmp/config and, even though the copy step (RUN cp) succeeds, the directory is always empty.
This is my current setup:
Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME ["/tmp", "/tmp/config", "/tmp/logs"]
ADD /target/*.jar app.jar
RUN apk add --update unzip && unzip app.jar "*application.yml" && ls -lrt
RUN ls -lrt /BOOT-INF/classes
RUN cp /BOOT-INF/classes/application.yml tmp/config
RUN ls -lrt tmp/config
# ----> Total 0
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar", "--spring.config.location=file:/tmp/config/application.yml"]
And in my docker-compose.yml I have a mapping for all three VOLUMES I'm defining above.
Do you guys have any idea on how to solve this issue without making the user drop the .yml file in the directory at first deploy?
Best regards,
Enrico Bergamo
In the end I decided to keep it simple and just mount a volume for an additional application.yml file. Before running the container, I create the directory and place my new .yml file in it, and that did the trick :)
FROM openjdk:8-jre-alpine
VOLUME ["/tmp", "/tmp/config", "/tmp/logs"]
ADD /target/*.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar", "--spring.config.location=classpath:/application.yml,file:/tmp/config/application.yml"]

Docker mount happens before or after entrypoint execution

I'm building a Docker image to run my Spring Boot based application. I want the user to be able to supply a runtime properties file by mounting the folder containing application.properties into the container. Here is my Dockerfile:
FROM java:8
RUN mkdir /app
RUN mkdir /app/config
ADD myapp.jar /app/
ENTRYPOINT ["java","-jar","/app/myapp.jar"]
To start the container, I run this:
docker run -d -v /home/user/config:/app/config myapp:latest
where /home/user/config contains the application.properties I want the jar file to pick up during run time.
However, this doesn't work: the app doesn't pick up the mounted properties file and uses the default one packed inside the jar instead. But when I exec into the started container and manually run the entrypoint command again, it works as expected and picks up the file I mounted. So I'm wondering: is this related to how mounts work with the entrypoint, or did I just not write the Dockerfile correctly for this case?
Spring Boot searches for application.properties inside a /config subdirectory of the current directory (among other locations). In your case, the current directory is / (the Docker default), so you need to change it to /app. To do that, add
WORKDIR /app
before the ENTRYPOINT line.
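Applied to the Dockerfile from the question, that looks roughly like this (a sketch):
FROM java:8
RUN mkdir -p /app/config
ADD myapp.jar /app/
# run from /app so Spring Boot finds ./config/application.properties
WORKDIR /app
ENTRYPOINT ["java","-jar","/app/myapp.jar"]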
And to answer your original question: mounts are done before anything inside the container is run.
