Why does creating a Docker container with a custom user force republishing derived images?

I am reading an article on Docker security about running Docker processes as a non-root user. It states that:
FROM openjdk:8-jdk
RUN useradd --create-home -s /bin/bash user
WORKDIR /home/user
USER user
This is simple, but forces us to republish all these derived images,
creating a maintenance nightmare.
1) what does it mean by republishing derived images?
2) How is this a maintenance nightmare?
3) Isn't this common practice? Most examples on the internet use a similar method to run Docker as non-root.

Say I have an application
FROM openjdk:8-jre
COPY myapp.jar /
CMD ["java", "-jar", "/myapp.jar"]
Now, I want to use your technique to have a common non-root user. So I need to change this Dockerfile to
FROM my/openjdk:8-jre # <-- change the base image name
USER root # <-- change back to root for file installation
COPY myapp.jar ./
USER user # <-- use non-root user at runtime
CMD ["java", "-jar", "./myapp.jar"]
Further, suppose there's a Java security issue and I need to update everything to a newer JRE. If I'm using the standard OpenJDK image, I just need to make sure I've docker pulled a newer image and then rebuild my application image. But if I'm using your custom intermediate image, I need to rebuild that image first, then rebuild the application. This is where the maintenance burden comes in.
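Concretely, the two update paths look something like this (the image and directory names are illustrative, not from the original question):

```shell
# Standard base image: pull the patched JRE and rebuild once
docker pull openjdk:8-jre
docker build -t myapp .

# Custom intermediate image (hypothetical my/openjdk:8-jre):
# rebuild and republish the intermediate first, then every derived image
docker pull openjdk:8-jre
docker build -t my/openjdk:8-jre ./base-image
docker push my/openjdk:8-jre
docker build --pull -t myapp .
```

And that second sequence has to be repeated for every image derived from the intermediate one.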
In my Docker images I tend to just RUN adduser and specify the USER in the image itself. (They don't need a home directory or any particular shell, and they definitely should not have a host-dependent user ID.) If you broadly think of a Dockerfile as having three "parts" – setting up OS-level dependencies, installing the application, and defining runtime parameters – I generally put this in the first part.
FROM openjdk:8-jre # <-- standard Docker Hub image
RUN adduser user # <-- add the user as a setup step
WORKDIR /app
COPY myapp.jar . # <-- install files at root
USER user # <-- set the default runtime user
CMD ["java", "-jar", "/app/myapp.jar"]
(Say your application has a security issue. If you've installed files as root and are running the application as non-root, then the attacker can't overwrite the installed application inside the container.)

Related

Building Docker image as non root user

New here, was wondering if someone had experience with building images as non root user?
I am building Kotlin project, (2 step build) and my goal is now to build it as non root user. Here is what my Dockerfile looks like. Any help would be appreciated:
# Build
FROM openjdk:11-jdk-slim as builder
# Compile application
WORKDIR /root
COPY . .
RUN ./gradlew build
FROM openjdk:11-jre-slim
# Add application
COPY --from=builder /root/build/libs/*.jar ./app.jar
# Set the build version
ARG build_version
ENV BUILD_VERSION=$build_version
COPY docker-entrypoint.sh /
RUN chmod 777 /docker-entrypoint.sh
CMD /docker-entrypoint.sh
In order to use Docker, you don't need to be a root user, you just need to be inside of the docker user group.
On Linux:
If there is not already a docker group, you can create one using the command sudo groupadd docker.
Add yourself and any other users you would like to be able to access docker to this group using the command sudo usermod -aG docker [username of user].
Relog, so that Linux can re-evaluate user groups.
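The steps above can be sketched as follows (requires sudo; `$USER` is the current user):

```shell
# Create the docker group if it does not exist yet
sudo groupadd docker
# Add the current user to the docker group
sudo usermod -aG docker "$USER"
# Re-evaluate group membership in the current shell without a full relog
newgrp docker
```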
If you are not trying to run the command as root, but rather want to run the container as non-root, you can use the following Dockerfile contents (insert them after FROM but before anything else):
# Add a new user "john" with user id 8877
RUN useradd -u 8877 john
# Change to non-root privilege
USER john

Reuse user in multi-stage Dockerfile

As you know, for security reasons it isn't good to use the root user except when you need it. I have this Dockerfile that I use with multi-stage builds:
FROM golang:latest AS base
WORKDIR /usr/src/app
# Create User and working dir
RUN addgroup --gid 42000 app
RUN useradd --create-home --uid 42000 --gid app app
RUN chown -R app:app /usr/src/app
RUN chmod 755 /usr/src/app
# Compile stage based on Debian
FROM base AS builder
USER app
# Copy from the host into the current WORKDIR in the container
COPY . .
# -x: trace commands, -u: error on unset variables, -e: exit on failure
RUN set -xue && \
make go-build-linux
# Final stage
FROM debian:latest
USER app
EXPOSE 14001
RUN apt-get update && \
apt-get install -y ca-certificates
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/server .
CMD ["./server"]
The problem is that I'm trying to reuse the user in all stages, but it seems the user's scope is per stage and I don't know how to reuse it.
Do you know how I can reuse a user in a multi-stage Dockerfile and avoid using the root user in the Dockerfile?
Thanks!
TL;DR:
It is not possible to reuse the same user across multiple stages of a docker build without re-creating it (with at least the same UID and GID) in each stage, because each FROM starts from a clean slate, and a user with UID 42000 and GID 42000 is unlikely to already exist in the new base image.
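A minimal sketch of what that means in practice: re-create the same UID/GID in every stage that needs it (the numbers here are just the example values from the question):

```dockerfile
FROM golang:latest AS builder
RUN groupadd --gid 42000 app && useradd --uid 42000 --gid app app
USER app
# ... build steps ...

FROM debian:latest
# The user from the builder stage does not exist here; create it again
# with the same UID/GID so file ownership from COPY --from lines up
RUN groupadd --gid 42000 app && useradd --uid 42000 --gid app app
USER app
```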
I am not aware of any recommendation against building as the root user inside a container. It is recommended to run services as unprivileged users; however, certain container processes must run as the root user (e.g. sshd):
The best way to prevent privilege-escalation attacks from within a container is to configure your container’s applications to run as unprivileged users. For containers whose processes must run as the root user within the container, you can re-map this user to a less-privileged user on the Docker host. The mapped user is assigned a range of UIDs which function within the namespace as normal UIDs from 0 to 65536, but have no privileges on the host machine itself.
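That remapping is a daemon-level setting. A sketch of enabling the default remap (this overwrites /etc/docker/daemon.json, so merge it by hand if you already have one, and restart the daemon afterwards):

```shell
# Enable user-namespace remapping with the default dockremap user
echo '{"userns-remap": "default"}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```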
Tip: the Haskell Dockerfile Linter (hadolint) will complain if the last user is root; you can configure it as a git pre-commit hook to catch things like that before committing the code.

Install Bash on scratch Docker image

I am presently working with a third party Docker image whose Dockerfile is based on the empty image, starting with the FROM scratch directive.
How can Bash be installed on such image? I tried adding some extra commands to the Dockerfile, but apparently the RUN directive itself requires Bash.
When you start a Docker image FROM scratch you get absolutely nothing. Usually the way you work with one of these is by building a static binary on your host (or these days in an earlier Dockerfile build stage) and then COPY it into the image.
FROM scratch
COPY mybinary /
ENTRYPOINT ["/mybinary"]
Nothing would stop you from creating a derived image and COPYing additional binaries into it. Either you'd have to specifically build a static binary or install a full dynamic library environment.
If you're doing this to try to debug the container, there is probably nothing else in the image. One thing this means is that the set of things you can do with a shell is pretty boring. The other is that you're not going to have the standard tool set you're used to (there is not an ls or a cp). If you can live without bash's various extensions, BusyBox is a small tool designed to be statically built and installed in limited environments that provides minimal versions of most of these standard tools.
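For example, a debug-friendly variant of a scratch image might copy BusyBox's static binaries into a derived image (mybinary is a placeholder for your own static binary; this assumes the upstream busybox image keeps its tools under /bin):

```dockerfile
FROM scratch
COPY mybinary /
# BusyBox is statically linked; its /bin holds the multi-call binary
# plus symlinks for sh, ls, cp, and friends
COPY --from=busybox:stable /bin/ /bin/
ENTRYPOINT ["/mybinary"]
```

With that in place, docker exec into the container gets you a minimal /bin/sh for poking around.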
The question is old, but I saw a similar question and came here, so I'm posting how to deal with such a case below.
I am presently working with a third party Docker image whose
Dockerfile is based on the empty image, starting with the FROM scratch
directive.
As mentioned by @David, there is nothing in an image based on scratch; such images just copy binaries in and that's it.
So the workaround with such an image is to copy its binaries into your extended image and use them in your desired Docker image.
For example postgres_exporter
FROM scratch
ARG binary
COPY $binary /postgres_exporter
EXPOSE 9187
ENTRYPOINT [ "/postgres_exporter" ]
So this is based on scratch: I cannot install bash or anything else, I can only copy binaries in and run them.
So here is the workaround: use it as a base in a multi-stage build, and copy its binaries along with the packages you install into your own Docker image.
Below we need to add wait-for-it
FROM wrouesnel/postgres_exporter
# use the above base image
FROM debian:7.11-slim
RUN useradd -u 20001 postgres_exporter
USER postgres_exporter
# Copy the exporter binary from the first stage (stage 0)
COPY --from=0 /postgres_exporter /postgres_exporter
EXPOSE 9187
COPY wait-for-it.sh wait-for-it.sh
USER root
RUN chmod +x wait-for-it.sh
USER postgres_exporter
RUN pwd
ENTRYPOINT ["./wait-for-it.sh", "db:5432", "--", "./postgres_exporter"]

Preventing access to code inside of a docker container

I want to build a production-ready image for clients to use, and I am wondering if there is a way to prevent access to my code within the image?
My current approach is storing my code in /root/ and creating a "customer" user that only has a startup script in their home dir.
My Dockerfile looks like this
FROM node:8.11.3-alpine
# Tools
RUN apk update && apk add alpine-sdk
# Create customer user
RUN adduser -s /bin/ash -D customer
# Add code
COPY ./code /root/code
COPY ./start.sh /home/customer/
# Set execution permissions
RUN chown root:root /home/customer/start.sh
RUN chmod 4755 /home/customer/start.sh
# Allow customer to execute start.sh
RUN echo 'customer ALL=(ALL) NOPASSWD: /home/customer/start.sh' | EDITOR='tee -a' visudo
# Default to use customer
USER customer
ENTRYPOINT ["sudo","/home/customer/start.sh"]
This approach works as expected, if I were to enter the container I won't be able to see the codebase but I can start up services.
The final step in my Dockerfile would be to either, set a password for the root user or remove it entirely.
I am wondering if this is a correct production flow or am I attempting to use docker for something it is not meant to?
If this is the correct, what other things should I lock down?
any tips appreciated!
Anybody who has your image can always do
docker run -u root imagename sh
Anybody who can run Docker commands at all has root access to their system (or can trivially give it to themselves via docker run -v /etc:/hostetc ...) and so can freely poke around in /var/lib/docker to see what's there. It will have all of the contents of all of the images, if scattered across directories in a system-specific way.
If your source code is actually secret, you should make sure you're using a compiled language (C, Go, Java kind of) and that your build process doesn't accidentally leak the source code into the built image, and it will be as secure as anything else where you're distributing binaries to end users. If you're using a scripting language (Python, JavaScript, Ruby) then intrinsically the end user has to have the code to be able to run the program.
Something else to consider is docker container export. This allows anyone to export the container's file system, and therefore have access to the code files.
I believe this bypasses removing sh/bash and any user permission changes, as others have mentioned.
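A sketch of that export path (imagename is a placeholder; the container never even needs to run):

```shell
# Create (without starting) a container from the image, then export its
# entire filesystem as a tar archive -- no shell inside the image needed
docker create --name dump imagename
docker export dump | tar -tvf - | less
docker rm dump
```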
You can offer some protection for your source code, even when it can't have a build stage, by removing bash and sh from your base image.
This approach restricts users from entering your container or image through commands like
docker (exec or run) -it (container id) bash or sh.
To have this kind of approach after all your build step add this command at the end of your build stage.
RUN rm -rf /bin/bash /bin/sh
You can also refer to Google's distroless images, which follow the same approach as above.
You can also remove users from the docker group and create sudo rules for just docker start and docker stop.

Best practice for running a non trusted .net core application inside a docker container

Let's say I want to run inside a docker container some third party .net core application I don't fully trust.
To simplify, let's assume that application is the simple Hello World console app generated by dotnet new. This is just the 2 files Program.cs and project.json.
Right now I have tried the following approach:
Copy that application into some folder of my host
Create a new container using the microsoft/dotnet image, mounting that folder as a volume, running a specific command for building and running the app:
$ docker run --rm -it --name dotnet \
-v /some/temp/folder/app:/app \
microsoft/dotnet:latest \
/bin/sh -c 'cd /app && dotnet restore && dotnet run'
I was also considering the idea of having a predefined dockerfile with microsoft/dotnet as the base image. It will basically embed the application code, set it as the working dir and run the restore, build and run commands.
FROM microsoft/dotnet:latest
COPY . /app
WORKDIR /app
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
ENTRYPOINT ["dotnet", "run"]
I could then copy the predefined dockerfile into the temp folder, build a new image just for that particular application and finally run a new container using that image.
Is the dockerfile approach overkill for simple command line apps? What would be the best practice for running those untrusted applications? (which might be one I completely ignore)
EDIT
Since I will discard the container after it runs and the docker command will be generated by some application, I will probably stay with the first option of just mounting a volume.
I have also found this blog post where they built a similar sandbox environment and ended up following the same mounted-volume approach.
As far as I know, what happens in Docker stays in Docker.
When you link a volume (-v) to the image, the process can alter the files in the folder you mounted. But only there. The process cannot follow any symlinks or step out of the mounted folder since it's forbidden for obvious security reasons.
When you don't link anything and copy the application code into the image, it's definitely isolated.
Exposing TCP/UDP ports is up to you, as is limiting memory/CPU consumption, and you can even isolate the process from the network.
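For example, those limits can all be set at run time. A sketch using the command from the question (the memory and CPU values are arbitrary; note that dotnet restore needs network access, so packages would have to be restored beforehand):

```shell
# Run with no network (only loopback) and capped memory/CPU
docker run --rm --network none --memory 256m --cpus 1 \
  -v /some/temp/folder/app:/app \
  microsoft/dotnet:latest \
  /bin/sh -c 'cd /app && dotnet run'
```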
Therefore, I don't think that using dockerfile is an overkill and I'd summarize it like this:
When you want to run it once, try it, and forget it: use the command line, if you are OK with typing the long command. If you plan to use it more, create a Dockerfile. I don't see much room for declaring a "best practice" here; consider it a question of personal preference.
