I'm deploying the hyperledger/fabric-couchdb Docker image on Rancher/Kubernetes. In the cluster, containers are not allowed to run as root, so we need to select "run as non-root" when deploying images.
After deploying hyperledger/fabric-couchdb, the pod does not start. When I checked the logs, the message is su-exec: setgroups: Operation not permitted. Below I have attached a screenshot from the Event as well. Please suggest what needs to be done to make it work, or tell me if I am doing something wrong here.
Event screenshot
That's the problem: you are not running as root, and the container entrypoint executes a call to setgroups, which requires root. You will have to either run as root somehow, or modify your container image and its entrypoint so that the calls which require root are made via something like sudo.
Note that whatever user calls sudo needs to have root-like permissions to execute setgroups.
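For illustration, another option is a derived image that never needs the privilege drop at all, because it already starts as the unprivileged user. This is only a sketch; the user name, data path, and start command are assumptions to verify against the actual image:
FROM hyperledger/fabric-couchdb
# Build steps still run as root, so make the data directory writable
# by the unprivileged user up front (path is illustrative).
RUN chown -R couchdb:couchdb /opt/couchdb
# Start directly as the non-root user...
USER couchdb
# ...and bypass the stock entrypoint, which calls su-exec/setgroups.
ENTRYPOINT ["/opt/couchdb/bin/couchdb"]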
I'm following https://www.redhat.com/sysadmin/podman-inside-container, aiming at rootless Podman running a rootless Podman container. podman info works, but shows this warning message:
WARN[0000] "/" is not a shared mount, this could cause issues or missing mounts with rootless containers
Running mount --make-rshared / as root solves that. My question is: how can I prepare my image so that I don't need root privileges every time I run the image?
I tried RUN mount --make-rshared / in the Dockerfile. That failed with "Permission denied", which confuses me because permission is not denied for everything else, like yum, useradd and file access.
I also tried putting the command in an init script, saving it in /etc/init.d and adding it with chkconfig. It doesn't seem to be executed when I run the image, although I added all runlevels: 123456. I can execute it directly, but of course only as root.
I'm rather new to Docker/Podman. There are some other open questions for me, like the missing shell commands login and runlevel, but the important goal is to get rid of the warning message.
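One detail worth knowing here: each RUN step executes in an unprivileged, throwaway build container (hence the "Permission denied"), and mount propagation is a runtime property that would not persist into the image anyway, so the change has to happen at container start. A hypothetical wrapper entrypoint sketch follows; the 'podman' user is made up, and the outer container still needs enough privilege for the mount call (e.g. CAP_SYS_ADMIN):
#!/bin/sh
# entrypoint.sh: make / a shared mount while still root, then drop to
# the unprivileged user that runs rootless Podman.
mount --make-rshared /
exec runuser -u podman -- "$@"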
While different database images are available for OpenShift Container Platform users as explained here, others, including ArangoDB, are not yet available. I tried to install the official ArangoDB container from Docker Hub by running the following command via the OpenShift CLI:
oc new-app arangodb
but it does not run successfully, throwing the following error:
chown: changing ownership of '/var/lib/arangodb3': Operation not permitted
It is related to permissions. By default, OpenShift runs containers using an arbitrarily assigned user ID, not as root, as documented in the Support Arbitrary User IDs section. In the Dockerfile, I tried to change the permissions of directories and files that may be written to by processes in the image, so that they are owned by the root group and readable/writable by that group:
RUN chgrp -R 0 /some/directory \
&& chmod -R g+rwX /some/directory
This time it throws the following error:
FATAL cannot set uid 'arangodb': Operation not permitted
Looking at the script that initializes arangodb (the arangod script), arangodb runs as arangodb:arangodb, which presumably needs to be arangodb:0 in the case of OpenShift.
Now, I am really confused. I've read and searched a lot:
Getting any Docker image running in your own OpenShift cluster
User namespaces have arrived in Docker!
new-app fails on some official Docker images due to chown permissions
I also tried reverse engineering by looking at the mongodb image provided by OpenShift. But in the end, I only got more confused.
I also do not want to ask cluster administrators to allow the project to run as root using:
# oadm policy add-scc-to-user anyuid -z default
The more I read, the more confused I get. Has anybody done this before who can provide a Docker container I can run on OpenShift?
With ArangoDB 3.4 the Docker image has been migrated to an Alpine-based image, and its core should no longer invoke chown/chgrp when invoked in the right way.
This should cover one of the requirements for getting it working on OpenShift.
If you still have problems running ArangoDB on OpenShift, use the GitHub issue tracker with the specific problems you see. You may also want to propose changes to the Dockerfile so it can be improved.
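As a quick smoke test (the image tag and environment variable are as commonly documented for the official image; verify against the current ArangoDB docs), something like this should come up without the chown failure:
oc new-app arangodb/arangodb:3.4 -e ARANGO_RANDOM_ROOT_PASSWORD=1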
I have a Synology NAS which has Docker support, and I wanted to run some Docker containers (I'm pretty new to Docker) on it, for example pocketmine-pm (but I believe I have the write issue with other containers as well).
I created a volume on the host and mapped it in the container settings. (And in the Synology Docker settings for the volume mapping I did not tick "read only".)
According to the Dockerfile, a new user 'pocketmine' is created inside the container and this user is used to start the server. The user seems to have the user ID 1000 (the first UID for new Linux users). The container also uses an entrypoint.sh script to start the server.
Initially the container was not able to write files to the mapped directory. I had to SSH into the host and 'chown' the directory to UID 1000:
sudo chown 1000:1000 /volXy/docker/pocketminemp -R
After that the archive could be downloaded and extracted.
Unfortunately I was not able to connect to the server from my iOS device. The server is listed as 'online' but the connection fails without any specific message. I then checked the logs of the container and saw the following entries (not sure if this really prevents the connection but I will give it a try):
[*] Everything done! Run ./start.sh to start PocketMine-MP
chown: changing ownership of '/pocketmine/entrypoint.sh': Operation not permitted
chown: changing ownership of '/pocketmine/server.properties.original': Operation not permitted
Loading pocketmine.yml...
Apparently the container cannot chown a file it was previously able to download.
Does anybody know what can be done to fix this? Do I need to chmod the mapped volume? And why did I need to chown the directory to UID 1000 (a user that doesn't really exist on the host)? Isn't there a more elegant way to fix the permissions?
When you run the container, you should be able to use the --user="uid:gid" flag to specify the user you wish to run the container as.
Source: https://docs.docker.com/engine/reference/run/#user
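For example (the UID/GID values and image name below are placeholders; use the IDs that own the mapped host directory):
docker run --user="1000:1000" -v /volXy/docker/pocketminemp:/data some/pocketmine-image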
I am currently trying to deploy a basic task queue and frontend using celery, rabbitmq and flower on Kubernetes (and minikube). I am following the example here:
https://github.com/kubernetes/kubernetes/tree/release-1.3/examples/celery-rabbitmq
I can get everything to work by following the instructions. However, when I run docker build on the Dockerfile in ./celery-app-add, push the image to my own repository, and replace endocode/celery-app-add with <mine>/celery-app-add, I can't get the example to run anymore. I am assuming that the Dockerfile in source control is wrong, because if I pull the endocode/celery-app-add image and run bash in it, it loads in as the root user (as opposed to user with the <mine>/celery-app-add Dockerfile).
After booting up all of the containers and services, I can see the following in the logs:
2016-08-18T21:05:44.846591547Z AttributeError: 'ChannelPromise' object has no attribute '__value__'
The celery logs show:
2016-08-19T01:38:49.933659218Z [2016-08-19 01:38:49,933: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@rabbit:5672//: [Errno -2] Name or service not known.
If I echo RABBITMQ_SERVICE_SERVICE_HOST within the container, it appears as the same host as indicated in the rabbitmq-service after running kubectl get services.
I am not really sure where to go from here. Any suggestions are appreciated. Also, I added USER root (won't run this in production, don't worry) to my Dockerfile and still ran into the same issues above. docker history endocode/celery-app-add hasn't been too helpful either.
It turns out the problem is based around this Celery issue: Celery prefers CELERY_BROKER_URL over anything that can be set in the app configuration. To fix this, I unset CELERY_BROKER_URL in the Dockerfile, and it picked up my configuration correctly.
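For reference, the same effect can be had at container startup; this is only a sketch, with the app module and worker flags made up rather than taken from the example repo:
#!/bin/sh
# Drop the injected variable so Celery falls back to the app's own
# broker setting, then hand PID 1 to the worker.
unset CELERY_BROKER_URL
exec celery -A app worker --loglevel=info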
Docker has pretty much always had a USER command to run a process as a specific user, but in general a lot of things had to run as root.
I have seen a lot of images that use an ENTRYPOINT with gosu to de-elevate the process to run.
I'm still a bit confused about the need for gosu. Shouldn't USER be enough?
I know quite a bit has changed in terms of security with Docker 1.10, but I'm still not clear about the recommended way to run a process in a docker container.
Can someone explain when I would use gosu vs. USER?
Thanks
EDIT:
The Docker best practice guide is not very clear: it says that if the process can run without privileges, use USER, and that if you need sudo, you might want to use gosu.
That is confusing, because one can install all sorts of things as root in the Dockerfile, then create a user and give it the proper privileges, and finally switch to that user and run the CMD as that user.
So why would we need sudo or gosu then?
Dockerfiles are for creating images. I see gosu as more useful as part of container initialization, once the image is built and you can no longer switch users between commands the way USER lets you in a Dockerfile.
After the image is created, something like gosu allows you to drop root permissions at the end of your entrypoint, inside the container. You may initially need root access to do some initialization steps (fixing UIDs, host-mounted volume permissions, etc.). Then, once initialized, you run the final service without root privileges and as PID 1 so it handles signals cleanly.
Edit:
Here's a simple example of using gosu in an image for docker and jenkins: https://github.com/bmitch3020/jenkins-docker
The entrypoint.sh looks up the gid of the /var/lib/docker.sock file and updates the gid of the docker user inside the container to match. This allows the image to be ported to other Docker hosts where the gid on the host may differ. Changing the group requires root access inside the container. Had I used USER jenkins in the Dockerfile, I would be stuck with the gid of the docker group as defined in the image, which wouldn't work if it doesn't match that of the Docker host it's running on. But root access can be dropped when running the app, which is where gosu comes in.
At the end of the script, the exec call prevents the shell from forking gosu; instead, gosu replaces the shell as PID 1. Gosu in turn does the same, switching the uid and then exec'ing the Jenkins process so that it takes over as PID 1. This allows signals to be handled correctly; they would otherwise be ignored by a shell running as PID 1.
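A condensed sketch of that entrypoint pattern (socket path, group, and user names are illustrative, not taken from the linked repo; check them against your host and image):
#!/bin/sh
# Align the container's 'docker' group with the gid that owns the
# mounted socket, so the 'jenkins' user can use it.
SOCK_GID=$(stat -c '%g' /var/run/docker.sock)
groupmod -g "$SOCK_GID" docker
# Replace this shell with the app, running unprivileged as PID 1.
exec gosu jenkins "$@"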
I am using gosu and an entrypoint.sh because I want the user in the container to have the same UID as the user that created the container.
Docker Volumes and Permissions.
The purpose of the container I am creating is development: I need to build for Linux, but I still want all the convenience of local (OS X) editing, tools, etc. Keeping the UIDs the same inside and outside the container keeps file ownership a lot more sane and prevents some errors (container user cannot edit files in the mounted volume, etc.).
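A minimal sketch of that pattern (HOST_UID/HOST_GID and the 'dev' user are invented for the example; pass the IDs in with docker run -e HOST_UID=$(id -u) -e HOST_GID=$(id -g) ...):
#!/bin/sh
# Create a user matching the host user's UID/GID, then drop root.
groupadd -g "${HOST_GID:-1000}" dev
useradd -u "${HOST_UID:-1000}" -g "${HOST_GID:-1000}" -m dev
exec gosu dev "$@"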
Another advantage of using gosu is signal handling. You may, for instance, trap SIGHUP to reload the process, as you would normally do via systemctl reload <process> or the like.
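For example, once the service runs as PID 1 (via exec and gosu), a reload can be triggered from the host (container name is illustrative):
docker kill --signal=HUP mycontainer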