Docker container fails because of x.crt permission denied

I am trying to add an SSL certificate and key to a Docker container so they can be used inside it. I do not want to use the COPY Dockerfile command; instead, I used a bind mount, as follows:
docker run -p 443:443 \
    -v grafana-storage:/var/lib/grafana \
    -v /etc/ssl/certs/platform-loc/x.crt:/etc/grafana/x.crt \
    -v /etc/ssl/certs/platform-loc/x.key:/etc/grafana/x.key \
    -e "GF_INSTALL_PLUGINS=yesoreyeram-boomtable-panel" \
    grafana_app
but that command fails with the following errors:
t=2019-08-28T17:33:40+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=0.0.0.0:443 protocol=https subUrl= socket=
t=2019-08-28T17:33:40+0000 lvl=eror msg="Stopped HTTPServer" logger=server reason="open /etc/grafana/x.crt: permission denied"
t=2019-08-28T17:33:40+0000 lvl=info msg="Stopped provisioningServiceImpl" logger=server reason="context canceled"
t=2019-08-28T17:33:40+0000 lvl=eror msg="Server shutdown" logger=server reason="open /etc/grafana/x.crt: permission denied"
and this is the content of my Dockerfile:
FROM grafana/grafana
COPY config /config
USER root
RUN apt-get update && apt-get install -y vim
RUN cp /config/x.toml /etc/grafana/x.toml && \
    cp /config/grafana.ini /etc/grafana/grafana.ini
ENTRYPOINT [ "/run.sh" ]
Could someone please help me to fix this?

When the container is launched, all mounted files keep the owner, group, and file mode they have on the host OS.
For that certificate it's probably root:root (or 0:0), readable only by its owner. Inside the container the user is grafana (uid 472).
$ docker run -it --rm --entrypoint bash grafana/grafana
grafana@8edd34dc044d:/usr/share/grafana$ whoami
grafana
grafana@8edd34dc044d:/usr/share/grafana$ grep grafana /etc/passwd
grafana:x:472:472::/home/grafana:/bin/sh
So, user grafana can't read the file owned by root.
You could change the permissions on the file to make it readable by all; that would solve the problem, but it also weakens the protection of that file on the host.
Or you could change the user in your image to root, but that is considered bad practice.
Which solution you choose is up to you. Perhaps this certificate is fine to leave world-readable.

The Grafana docs led me to the answer; basically, I had to run
chown 472:472 x.*
and the problem is solved now.
Doc: https://grafana.com/docs/installation/docker/
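For reference, the difference between the two fixes can be sketched on a throwaway file standing in for the real certificate (the chown step needs root, so it is commented out here):

```shell
# Stand-in for the certificate; the real one lives in /etc/ssl/certs/platform-loc/
tmpdir=$(mktemp -d)
touch "$tmpdir/x.crt"
chmod 600 "$tmpdir/x.crt"                 # owner-only read/write, like the failing setup

stat -c '%u:%g %a' "$tmpdir/x.crt"        # current owner uid:gid and mode

# Option 1 (what the Grafana docs suggest): hand the file to uid/gid 472.
# Needs root:
# chown 472:472 "$tmpdir/x.crt"

# Option 2: make the file readable by everyone, including uid 472
chmod 644 "$tmpdir/x.crt"
stat -c '%a' "$tmpdir/x.crt"              # 644
```

Either way, the goal is the same: the grafana user (uid 472) inside the container must be able to read the mounted file.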

Related

Use ~/.ssh/config hosts in docker context

I'm trying to find a way to use hosts defined in my user's ~/.ssh/config file to define a docker context.
My ~/.ssh/config file contains:
Host my-server
    HostName 10.10.10.10
    User remoteuser
    IdentityFile /home/me/.ssh/id_rsa-mykey.pub
    IdentitiesOnly yes
I'd like to create a docker context as follows:
docker context create \
    --docker host=ssh://my-server \
    --description="remoteuser on 10.10.10.10" \
    my-server
Issuing the docker --context my-server ps command throws an error stating:
... please make sure the URL is valid ... Could not resolve hostname my-server: Name or service not known
From what I could figure out, the docker command uses the sudo mechanism to elevate its privileges. Thus I guess it searches /root/.ssh/config, since ssh does not honor the $HOME variable.
I tried to symlink the user's config as the root one:
sudo ln -s /home/user/.ssh/config /root/.ssh/config
But this throws another error:
... please make sure the URL is valid ... Bad owner or permissions on /home/user/.ssh/config
The same happens when creating the /root/.ssh/config file simply containing:
Include /home/*/.ssh/config
Does someone have an idea of how to have my user's .ssh/config file parsed by ssh when it is invoked via sudo?
Thank you.
Have you confirmed your (probably correct) theory that docker is running as root, by just directly copying your user's ~/.ssh/config contents into /root/.ssh/config? If that doesn't work, you're back to square one...
Otherwise, either the symlink or the Include ought to work just fine (a symlink inherits the permissions of the file it is pointing at).
Another possibility is that your permissions actually are bad -- don't forget you have to change the permissions on both ~/.ssh AND ~/.ssh/config.
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/config
And maybe even:
chmod 700 /root/.ssh
chmod 600 /root/.ssh/config
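The claim above that a symlink inherits the permissions of its target is easy to verify on a scratch file (throwaway paths, not your real ~/.ssh):

```shell
tmp=$(mktemp -d)
touch "$tmp/config"
chmod 600 "$tmp/config"            # owner-only, as ssh requires

ln -s "$tmp/config" "$tmp/link"

# stat -L follows the link: the reported mode is the target's 600,
# so ssh sees the same permissions through the symlink
stat -L -c '%a' "$tmp/link"
```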

Eclipse Mosquitto Docker: Unable to open log file /opt/mosquitto/log/mosquitto.log for writing

I want to run the eclipse-mosquitto MQTT server in a Docker container on a Raspberry Pi.
The command I am using to run it is:
docker run --name mqtt --restart=always --net=host -tid -u 1883:1883 \
    -v /opt/mosquitto/config:/mosquitto/config:ro \
    -v /opt/mosquitto/log:/mosquitto/log:rw \
    -v /opt/mosquitto/data/:/mosquitto/data/:rw \
    eclipse-mosquitto
When starting up the server, I am getting the following error message:
1615232346: Error: Unable to open log file /opt/mosquitto/log/mosquitto.log for writing.
Also from time to time I am getting the following error in the docker logs:
1615241350: Error: No such file or directory.
I assume this one is for the unwritable data directory.
My mosquitto user and the access rights on the folders in /opt/mosquitto/ look correct to me (screenshots omitted). I even changed the access rights on mosquitto.log to 777.
Unfortunately I am still getting the error. The server is up and running, but I cannot access the logs and nothing can be written to the data directory.
I also already checked multiple solutions (e.g. https://github.com/eclipse/mosquitto/issues/909), but nothing has worked so far.
Can you help me out how to solve this?
I had the same issue. I solved it like this:
First I checked the default permission of the files (e.g. README) in the ca_certificates and certs folders. It was -rw-r--r-- (644), so I set that permission on all cert files:
sudo chmod 0644 ./ca_certificates/* ./certs/*
and also the folders' permissions; they were drwxr-xr-x (755):
sudo chmod 0755 ./ca_certificates ./certs
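The same two chmod calls can also be applied recursively, one pass for directories and one for files, using find; sketched here against a scratch copy of the layout rather than the real cert directories (file names are hypothetical):

```shell
scratch=$(mktemp -d)
mkdir -p "$scratch/ca_certificates" "$scratch/certs"
touch "$scratch/certs/server.crt"               # hypothetical file name

find "$scratch" -type d -exec chmod 0755 {} +   # directories: drwxr-xr-x
find "$scratch" -type f -exec chmod 0644 {} +   # files: -rw-r--r--

stat -c '%a' "$scratch/certs" "$scratch/certs/server.crt"   # 755, then 644
```

This is handy when the tree is deeper than two flat folders, since a plain chmod with a glob does not recurse.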

Docker permission denied with volume

I'm trying to start an Nginx container that serves static content located on the host, in /opt/content.
The container is started with:
docker run -p 8080:80 -v /opt/content:/usr/share/nginx/html nginx:alpine
And Nginx keeps giving me 403 Forbidden. Moreover, when trying to inspect the content of the directory, I get strange results:
$ docker exec -i -t inspiring_wing /bin/sh
/ # ls -l /usr/share/nginx/
total 4
drwxrwxrwx 3 root root 4096 Aug 15 08:08 html
/ # ls -l /usr/share/nginx/html/
ls: can't open '/usr/share/nginx/html/': Permission denied
total 0
I ran chmod -R 777 /opt/ to be sure there are no restrictions on the host, but it doesn't change anything. I also tried adding the :ro flag to the volume option, with no luck.
How can I make the mounted volume readable by the container ?
UPDATE: here are the full steps I took to reproduce this problem (as root, and with another directory, to start from a clean config):
mkdir /public
echo "Hello World" > /public/index.html
chmod -R 777 /public
docker run -p 8080:80 -d -v /public:/usr/share/nginx/html nginx:alpine
docker exec -i -t inspiring_wing /bin/sh
ls -l /usr/share/nginx/html
And this last command inside the container fails with the same Permission denied error. Of course, replace inspiring_wing with the name of the created container.
The problem was caused by SELinux, which prevented Docker from accessing the file system.
If someone has the same problem as in this post, here is how to check whether it's the same situation:
1/ Check the SELinux status: sestatus. If the mode is enforcing, it may block Docker from accessing the filesystem.
# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: actual (secure)
Max kernel policy version: 31
2/ Change the mode to permissive: setenforce 0. There should be no more restrictions on Docker.
You're copying from /opt/content on the host, to /usr/share/nginx/html in the container. So when you log in, you want to look in /usr/share/nginx/html for the files.
If this doesn't help enough, you can paste the content of ls -lah /usr/share/nginx/html but I think you just don't have an index page in there.
Instead of setting SELinux to permissive on your host entirely, I would recommend setting the correct security context for your volume source:
chcon -R -t container_file_t PATHTOHOSTDIR
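On SELinux hosts there is also a Docker-native alternative to running chcon yourself: appending :z (shared between containers) or :Z (private to one container) to the -v mount makes Docker relabel the host directory at mount time. A sketch of the two equivalent approaches, using the paths from the question (neither is run here, since both need an SELinux host and a Docker daemon):

```shell
# Option 1: label the host directory once, then mount as before
chcon -R -t container_file_t /opt/content

# Option 2: let Docker apply the label when mounting
# (:z = shared among containers, :Z = private to this container)
docker run -p 8080:80 -v /opt/content:/usr/share/nginx/html:z nginx:alpine
```

The :z/:Z route avoids touching labels on the host by hand, at the cost of Docker relabeling the whole directory tree.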

Can not run metricbeat in docker

I am trying to run Metricbeat using Docker on a Windows machine, and I have changed metricbeat.yml as per my requirements.
docker run -v /c/Users/someuser/docker/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml \
    docker.elastic.co/beats/metricbeat:5.6.0
but I am getting this error:
metricbeat2017/09/17 10:13:19.285547 beat.go:346: CRIT Exiting: error
loading config file: config file ("metricbeat.yml") can only be
writable by the owner but the permissions are "-rwxrwxrwx" (to fix the
permissions use: 'chmod go-w /usr/share/metricbeat/metricbeat.yml')
Exiting: error loading config file: config file ("metricbeat.yml") can only be writable by the owner but the permissions are "-rwxrwxrwx" (to fix the permissions use: 'chmod go-w /usr/share/metricbeat/metricbeat.yml')
Why am I getting this?
And what is the right way to make a permanent change to a file in a Docker container (as I don't want to change the configuration file each time the container starts)?
Edit:
A container is not meant to be edited/changed. If necessary, Docker volume management is available to externalize all configuration-related work. Thanks.
There are two options here, I think.
The first is to ensure the file has the proper permissions:
chmod 644 metricbeat.yml
Or you can run your docker command with -strict.perms=false, which tells Metricbeat not to check the permissions on the metricbeat.yml file:
docker run \
    --volume="/c/Users/someuser/docker/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml" \
    docker.elastic.co/beats/metricbeat:5.6.0 \
    -strict.perms=false
Note that --volume must come before the image name, while -strict.perms=false is an argument to Metricbeat itself and therefore comes after it.
You can see more documentation about that flag in the link below:
https://www.elastic.co/guide/en/beats/metricbeat/current/command-line-options.html#global-flags
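What the suggested chmod go-w actually does can be seen on a scratch file; it strips the write bit from group and others, which is exactly the condition Metricbeat checks for:

```shell
f=$(mktemp)
chmod 777 "$f"            # world-writable, like the rejected metricbeat.yml
stat -c '%a' "$f"         # 777

chmod go-w "$f"           # remove write permission from group and others
stat -c '%a' "$f"         # 755: only the owner can write now
```

Unlike chmod 644, the symbolic go-w form leaves any existing execute bits alone, which is why Elastic's error message suggests it.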

How do you run an Openshift Docker container as something besides root?

I'm currently running OpenShift, but I am running into a problem when I try to build/deploy my custom Docker container. The container works properly on my local machine, but once it gets built in OpenShift and I try to deploy it, I get the error message below. I believe the problem is that I am trying to run commands inside of the container as root.
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
My Dockerfile that I am deploying looks like this:
FROM centos:7
MAINTAINER me<me@me>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN git clone https://github.com/dockerFileBootstrap.git
RUN ansible-playbook "-e edit_url=andrewgarfield edit_alias=emmastone site_url=testing.com" dockerAnsible/dockerFileBootstrap.yml
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
COPY supervisord.conf /usr/etc/supervisord.conf
RUN rm -rf supervisord.conf
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80 443
#CMD ["/usr/bin/supervisord"]
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I've run into similar problems multiple times, where it will say things like Permission denied on file /supervisord.log or something similar.
How can I set it up so that my container doesn't run all of these commands as root? It seems to be causing all of the problems that I am having.
OpenShift has a strict security policy regarding custom Docker builds.
Have a look at this OpenShift Application Platform page,
in particular at point 4 in the FAQ section, quoted here.
4. Why doesn't my Docker image run on OpenShift?
Security! Origin runs with the following security policy by default:
Containers run as a non-root unique user that is separate from other system users
They cannot access host resources, run privileged, or become root
They are given CPU and memory limits defined by the system administrator
Any persistent storage they access will be under a unique SELinux label, which prevents others from seeing their content
These settings are per project, so containers in different projects cannot see each other by default
Regular users can run Docker, source, and custom builds
By default, Docker builds can (and often do) run as root. You can control who can create Docker builds through the builds/docker and builds/custom policy resource.
Regular users and project admins cannot change their security quotas.
Many Docker containers expect to run as root (and therefore edit all the contents of the filesystem). The Image Author's guide gives recommendations on making your image more secure by default:
Don't run as root
Make directories you want to write to group-writable and owned by group id 0
Set the net-bind capability on your executables if they need to bind to ports <1024
Otherwise, you can see the security documentation for descriptions on how to relax these restrictions.
I hope it helps.
Although you don't have access to root, your OpenShift container, by default, is a member of the root group. You can change some dir/file permissions to avoid the Permission Denied errors.
If you're using a Dockerfile to deploy an image to OpenShift, you can add the following RUN command to your Dockerfile:
RUN chgrp -R 0 /run && chmod -R g=u /run
This will change the group for everything in the /run directory to the root group and then set the group permission on all files to be equivalent to the owner (group equals user) of the file. Essentially, any user in the root group has the same permissions as the owner for every file.
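What the g=u mode does can be sketched on a scratch file standing in for /run/httpd/httpd.pid (the chgrp step needs root, so it is commented out here):

```shell
d=$(mktemp -d)
touch "$d/httpd.pid"                 # stand-in for /run/httpd/httpd.pid
chmod 640 "$d/httpd.pid"             # owner rw-, group r--, others ---

# In the real image you would also hand the tree to gid 0 (needs root):
# chgrp -R 0 "$d"

chmod -R g=u "$d"                    # copy the owner's bits onto the group bits
stat -c '%a' "$d/httpd.pid"          # 660: any member of the group can now write it
```

Since every OpenShift container user is a member of the root group, that combination is enough for httpd to create its pid file without running as root.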
You can run the container as any user, including root (rather than OpenShift's default built-in account UID, e.g. 1000030000), by issuing these two commands in sequence with the oc CLI tools:
oc login -u system:admin -n default
followed by
oc adm policy add-scc-to-user anyuid -z default -n projectname
where projectname is the name of the project under which your container is deployed.
