Add Security Properties to K8S Pod to Execute NSEnter - docker

I am using a pre-created image from Amazon registry. When I run the image I receive the following error:
nsenter: failed to execute /tmp/test.sh: Permission denied
I set the security context of the pod to privileged and set runAsUser to 1000:
securityContext:
  privileged: true
  runAsUser: 1000
The error persists and I am not sure why.
Do I need to allow with SELinux specific syscalls?

It seems to me that you are trying to execute the file /tmp/test.sh inside the container, and this fails because of permissions. If that is the case, you could either alter your image build to include something like RUN chmod +x /tmp/test.sh, or manually set the executable flag on the file (with chmod +x /tmp/test.sh). This is just a guess because, as others have mentioned, the question does not include many details about the error.
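For reference, a minimal sketch of the image-build fix (the base image is illustrative; the question doesn't say which image is actually used):
FROM amazonlinux:2
COPY test.sh /tmp/test.sh
# Set the executable bit at build time so nsenter can execute the script
RUN chmod +x /tmp/test.sh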

Related

How to give proper permission for file system in containers running on OCP?

I have my docker application running on OpenShift. I am facing a permission issue in the container. My docker file looks like this:
.....
RUN chmod +x /tmp/ui-app/isf-management-api
RUN chgrp -R 0 /tmp/ui-app/build/ && \
chmod -R g=u /tmp/ui-app/build/
# Set the entry point
ENTRYPOINT (cd /tmp/ui-app && ./management-api)
USER 65534
EXPOSE 10555
I added chgrp and chmod so that I could create/update files in the container programmatically. It works correctly on some clusters, but others still hit the permission issue. After debugging further, I found that the user inside the container differs between clusters.
In non-working case :
sh-4.4$ touch 1
touch: cannot touch '1': Permission denied
sh-4.4$ whoami
nobody
sh-4.4$
on the other hand, in the working case it is :
sh-4.4$ whoami
1000630000
sh-4.4$ touch 3
sh-4.4$
But the docker image is the same in both places.
Any idea what's wrong here?
Quoting the docs:
By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. This provides additional security against processes escaping the container due to a container engine vulnerability and thereby achieving escalated permissions on the host node.
For an image to support running as an arbitrary user, directories and files that are written to by processes in the image must be owned by the root group and be read/writable by that group. Files to be executed must also have group execute permissions.
The docs then give some example of how to build a Dockerfile that complies with this, as well as how to modify the SecurityContextConstraint if you really must violate this security policy. (Which it doesn't sound like you need to.)
Found the issue: USER 65534 is nobody. The Dockerfile should specify 1001 as the non-root user instead.
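For reference, a minimal Dockerfile sketch that follows the quoted guidance (the base image and paths are illustrative):
FROM registry.access.redhat.com/ubi8/ubi-minimal
COPY build/ /opt/app/
# Root group ownership plus group permissions mirroring the owner's, so an arbitrary UID in group 0 can use the files
RUN chgrp -R 0 /opt/app && \
chmod -R g=u /opt/app
# Any numeric non-root UID; OpenShift typically replaces it with an arbitrary UID that belongs to group 0
USER 1001
ENTRYPOINT ["/opt/app/management-api"]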

Should I run things inside a docker container as non root for safety?

I already run docker build and docker run without sudo. However, when I launch a process inside a docker container, it shows up as a root process in top on the host (not inside the container).
While it cannot access the host filesystem because of namespacing and cgroups from docker, is it still more dangerous than running as a simple user?
If so, how is the right way of running things inside docker as non root?
Should I just do USER nonroot at the end of the Dockerfile?
UPDATE:
Root is also needed for building some things. Should I put USER at the very top of the Dockerfile, install sudo together with the other dependencies, and then use sudo only when needed during the build?
Can someone give a simple Dockerfile example with USER at the beginning that installs and uses sudo?
Running the container as root brings a lot of risks. Although being root inside the container is not the same as being root on the host machine (some more details here), and you're able to drop a lot of capabilities during container startup, avoiding root is still the recommended approach.
Usually it is a good idea to use the USER directive in your Dockerfile after you install the general packages/libraries, in other words, after the operations that require root privileges. Installing sudo in a production service image is a mistake unless you have a really good reason for it; in most cases you don't need it, and it is more of a security risk than a convenience. If you need permission to access particular files or directories in the image, make sure the user you specify in the Dockerfile can actually access them (setting the proper uid, gid, and other options, depending on where you deploy your container). Usually you don't need to create the user beforehand, but if you need something custom, you can always do that.
Here's an example Dockerfile for a Java application that runs under user my-service:
FROM alpine:latest
RUN apk add openjdk8-jre
COPY ./some.jar /app/
ENV SERVICE_NAME="my-service"
# Create a dedicated system user and group with a fixed uid/gid (Alpine's BusyBox addgroup/adduser take short flags)
RUN addgroup -g 1001 -S $SERVICE_NAME && \
adduser -u 1001 -G $SERVICE_NAME -s /bin/false -D -H -S $SERVICE_NAME && \
mkdir -p /var/log/$SERVICE_NAME && \
chown $SERVICE_NAME:$SERVICE_NAME /var/log/$SERVICE_NAME
EXPOSE 8080
USER $SERVICE_NAME
CMD ["java", "-jar", "/app/some.jar"]
As you can see, I create the user beforehand and set its gid, and disable its shell and password login, as it is going to be a 'service' user. The user also becomes the owner of /var/log/$SERVICE_NAME, assuming it will write some files there. Now we have a much smaller attack surface.
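A quick way to verify the effective user (a sketch; the my-service image tag is an assumption):
docker build -t my-service .
docker run --rm my-service id
Since the image only sets CMD, passing id as the command replaces it, and the output should show uid=1001(my-service).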
Why you shouldn't run as root
While other people have pointed out that you shouldn't run images as root, there isn't much information here, or in the docs about why that is.
While it's true that there is a difference between having root access to a container and root access on the host, root access on a container is still very powerful.
Here is a really good article that goes in depth on the difference between the two, and this issue in general:
https://www.redhat.com/en/blog/understanding-root-inside-and-outside-container
The general point is that if there is a malicious process in your container, it can do whatever it wants inside the container: installing packages, uploading data, hijacking resources, you name it.
This also makes it easier for a process to break out of the container and gain privileges on the host since there are no safeguards within the container itself.
How and when to run as non-root
What you want to do is run all your installation and file download/copy steps as root (a lot of things need to be installed as root, and in general it's just a better practice for the reasons I outline below). Then, explicitly create a user and grant that user the minimum level of access that they need to run the application. This is done through the use of chmod and chown commands.
Immediately before your ENTRYPOINT or CMD directive, you then add a USER directive to switch to the newly created user. This will ensure that your application runs as a non-root user, and that user will only have access to what you explicitly gave it access to in previous steps.
The general idea is that the user that runs the container should have an absolute minimum of permissions (most of the time the user doesn't need read, write, and execute access to a file). That way, if there is a malicious process in your container, its behavior will be as restricted as possible. This means that you should avoid creating or copying in any files, or installing any packages as that user too, since they would have complete control over any resources they create by default. I've seen comments suggesting otherwise. Ignore them. If you want to be in line with security best practices, you would then have to go back and revoke the user's excess permissions, and that would just be awful and error prone.
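A minimal sketch of that ordering (the base image, binary name, paths, and user are all illustrative):
FROM debian:bookworm-slim
COPY app /opt/app/app
# Create a locked-down service account and grant it only read/execute on the binary it must run
RUN useradd --system --no-create-home --shell /usr/sbin/nologin appuser && \
chown root:root /opt/app/app && \
chmod 0555 /opt/app/app
# Switch to the unprivileged user only after all root-level build steps are finished
USER appuser
ENTRYPOINT ["/opt/app/app"]
Because the files stay owned by root, the runtime user cannot modify them even if the process is compromised.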
You can check out the CIS benchmark for Docker: it recommends running as non-root, and this is one of the "Compliance" checks. Adding a USER directive with a non-root user at the bottom of the Dockerfile should suffice, or you can pass -u/--user to docker run to specify the user at start time as well (see the example below the links).
https://www.cisecurity.org/benchmark/docker/
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
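For example, overriding the user at container start (my-image is a placeholder):
docker run --rm -u 1001:1001 my-image
The -u/--user flag overrides whatever USER the image declares, so it also helps harden images you don't build yourself.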
Running your containers as non-root gives you an extra layer of security. By default, Docker containers run as root, which allows unrestricted activity inside the container.

how to disallow docker cp option

I have created a docker image for my runtime environment.
For some reason I need to put encryption keys in the container, since it requires them for its operation.
Is there some way I can block the option to execute docker cp and pull those keys?
Thanks
No.
Docker doesn't have any way to selectively limit which commands a user can run. Also, if you can docker run anything at all, you can, for instance, put yourself in the host's /etc/sudoers file and start poking around in /var/lib/docker for *.key files: anyone who can run Docker commands has unrestricted root access to the host.
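A sketch of why any docker cp restriction would be pointless (assuming you can still run containers on that host):
# Bind-mount Docker's data directory from the host and search it for key files
docker run --rm -v /var/lib/docker:/host-docker:ro alpine \
find /host-docker -name '*.key'
Anything docker cp could reach is equally reachable through docker run with a bind mount.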

Permission denied while deploying/activating docker image in Rancher-Kubernetes

I'm deploying the hyperledger/fabric-couchdb docker image on Rancher-Kubernetes. The cluster does not allow running containers as root, so we need to select Run as Non-Root while deploying images.
After deploying hyperledger/fabric-couchdb, the pod does not start. When I checked the logs, the message is su-exec: setgroups: Operation not permitted. I have attached a screenshot of the Event below as well. Please suggest what needs to be done to make it work, or whether I am doing something wrong here.
Event screenshot
That's the problem: you are not running as 'root', and the container entrypoint executes a call to setgroups, which requires 'root'. You will have to either run as 'root' somehow, or modify the container image and its entrypoint so that the calls requiring 'root' are made with something like 'sudo'.
Note that whatever user calls 'sudo' needs 'root'-like permissions to execute setgroups.
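If the cluster policy can be relaxed for this one workload, the "run as 'root' somehow" option would look roughly like this in the pod spec (a sketch; whether it is permitted depends on the cluster's security policies):
securityContext:
  runAsUser: 0
  runAsNonRoot: false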

How to run Arangodb on Openshift?

While different database images are available for OpenShift Container Platform users as explained here, others, including ArangoDB, are not yet available. I tried to install the official ArangoDB container from Docker Hub by running the following command via the OpenShift CLI:
oc new-app arangodb
but it does not run successfully, throwing the following error:
chown: changing ownership of '/var/lib/arangodb3': Operation not permitted
It is related to permissions. By default, OpenShift runs containers using an arbitrarily assigned user ID and not as root, as documented in the Support Arbitrary User IDs section. In the Dockerfile, I tried to change the permissions of the directories and files that may be written to by processes in the image so that they are owned by the root group and readable/writable by that group:
RUN chgrp -R 0 /some/directory \
&& chmod -R g+rwX /some/directory
This time it throws the following error:
FATAL cannot set uid 'arangodb': Operation not permitted
By looking at the script that initializes arangodb (the arangod script), arangodb runs as arangodb:arangodb, which should (or may!) be arangodb:0 in the case of OpenShift.
Now, I am really confused. I've read and searched a lot:
Getting any Docker image running in your own OpenShift cluster
User namespaces have arrived in Docker!
new-app fails on some official Docker images due to chown permissions
I also tried reverse-engineering the mongodb image provided by OpenShift, but in the end I just got more confused.
I also do not want to ask cluster administrators to allow the project to run as root using:
# oadm policy add-scc-to-user anyuid -z default
The more I read, the more confused I get. Has anybody done this before who can provide a docker container I can run on OpenShift?
With ArangoDB 3.4 the docker image has been migrated to an Alpine-based image, and its core shouldn't invoke chown/chgrp anymore when invoked in the right way.
This should satisfy one of the requirements for getting it working on OpenShift.
If you still have problems running ArangoDB on OpenShift, use the GitHub issue tracker and describe the specific problems you see. You may also want to propose changes to the Dockerfile so it can be improved.
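For example, something along these lines (a sketch; the exact tag and required environment variables depend on the image version, and ARANGO_ROOT_PASSWORD is one of the startup options the image documents):
oc new-app arangodb:3.4 -e ARANGO_ROOT_PASSWORD=openSesame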
