Share a volume mounted with gcsfuse over NFS in Kubernetes

I've been trying to mount a Google Cloud Storage bucket inside a pod in our (on-prem) cluster in order to share that mounted volume over NFS with other pods and PersistentVolumes.
Here are the configurations. The startup script:
#!/bin/bash
_start_nfs() {
    exportfs -a
    rpcbind
    rpc.statd
    rpc.nfsd
    rpc.mountd
    GOOGLE_APPLICATION_CREDENTIALS=/accounts/key.json gcsfuse -o allow_other --dir-mode 777 --uid 1500 --gid 1500 ${BUCKET} /exports
}

_nfs_server_mounts() {
    IFS=':' read -r -a MNT_SERVER_ARRAY <<< "$NFS_SERVER_DIRS"
    for server_mnt in "${MNT_SERVER_ARRAY[@]}"; do
        if [[ ! -d $server_mnt ]]; then
            mkdir -p $server_mnt
        fi
        chmod -R 777 $server_mnt
        cat >> /etc/exports <<EOF
${server_mnt} *(rw,sync,no_subtree_check,all_squash,anonuid=1500,anongid=1500,fsid=$(( ( RANDOM % 100 ) + 200 )))
EOF
    done
    cat /etc/exports
}

_sysconfig_nfs() {
    cat > /etc/sysconfig/nfs <<EOF
#
#
# To set lockd kernel module parameters please see
# /etc/modprobe.d/lockd.conf
#
# Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
RPCNFSDARGS=""
# Number of nfs server processes to be started.
# The default is 8.
RPCNFSDCOUNT=${RPCNFSDCOUNT}
#
# Set V4 grace period in seconds
#NFSD_V4_GRACE=90
#
# Set V4 lease period in seconds
#NFSD_V4_LEASE=90
#
# Optional arguments passed to rpc.mountd. See rpc.mountd(8)
RPCMOUNTDOPTS=""
# Port rpc.mountd should listen on.
#MOUNTD_PORT=892
#
# Optional arguments passed to rpc.statd. See rpc.statd(8)
STATDARG=""
# Port rpc.statd should listen on.
#STATD_PORT=662
# Outgoing port statd should used. The default is port
# is random
#STATD_OUTGOING_PORT=2020
# Specify callout program
#STATD_HA_CALLOUT="/usr/local/bin/foo"
#
#
# Optional arguments passed to sm-notify. See sm-notify(8)
SMNOTIFYARGS=""
#
# Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)
RPCIDMAPDARGS=""
#
# Optional arguments passed to rpc.gssd. See rpc.gssd(8)
# Note: The rpc-gssd service will not start unless the
# file /etc/krb5.keytab exists. If an alternate
# keytab is needed, that separate keytab file
# location may be defined in the rpc-gssd.service's
# systemd unit file under the ConditionPathExists
# parameter
RPCGSSDARGS=""
#
# Enable usage of gssproxy. See gssproxy-mech(8).
GSS_USE_PROXY="yes"
#
# Optional arguments passed to blkmapd. See blkmapd(8)
BLKMAPDARGS=""
EOF
}
### main ###
_sysconfig_nfs
_nfs_server_mounts
_start_nfs
rpcinfo -p
showmount -e
tail -f /dev/null
The Dockerfile:
FROM centos:7
RUN yum -y install /usr/bin/ps nfs-utils nfs4-acl-tools curl portmap fuse && yum clean all
RUN mkdir -p /exports
ENV RPCNFSDCOUNT=8 \
    NFS_SERVER_DIRS='/usr'
RUN chown nfsnobody:nfsnobody /exports
RUN chmod 777 /exports
ADD setup.sh /usr/local/bin/run_nfs.sh
RUN chmod +x /usr/local/bin/run_nfs.sh
RUN useradd -u 1500 orenes
ADD gcsfuse-0.41.6-1.x86_64.rpm /data/gcsfuse-0.41.6-1.x86_64.rpm
RUN yum install -y /data/gcsfuse-0.41.6-1.x86_64.rpm
# Expose volume
VOLUME ["/exports"]
# expose mountd 20048/tcp and nfsd 2049/tcp and rpcbind 111/tcp
EXPOSE 2049/tcp 20048/tcp 111/tcp 111/udp
ENTRYPOINT ["/usr/local/bin/run_nfs.sh"]
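For reference, when an image like this is run directly with Docker (outside Kubernetes), both gcsfuse and the in-kernel NFS server need extra privileges; a minimal sketch, with the image tag, bucket name and key path as placeholders, and assuming the host already has the nfsd kernel module loaded:
docker build -t nfs-gcsfuse .
docker run -d --name nfs-gcsfuse \
    --privileged \
    -e BUCKET=my-bucket \
    -v /path/to/key.json:/accounts/key.json:ro \
    -p 2049:2049 -p 20048:20048 -p 111:111 \
    nfs-gcsfuse
The Kubernetes equivalent would be a privileged securityContext (or at least the SYS_ADMIN capability plus access to /dev/fuse) on the NFS server pod.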
Let's assume there is a PersistentVolume backed by a Service that points to the NFS server described above.
So far I only get errors such as "access denied" and "unable to receive".
Should I move to SMB/CIFS sharing instead? Is there any solution for this? Thanks in advance.
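For completeness, a minimal sketch of the PersistentVolume assumed here (names, capacity and the IP are placeholders); note that the kubelet performs the NFS mount on the node, so the Service ClusterIP is safer than the cluster DNS name:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gcs-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.96.0.50   # ClusterIP of the Service in front of the NFS pod
    path: /exports
EOF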

It seems that FUSE and the kernel NFS server don't get along well. I ended up using SMB and it works flawlessly, but you must use the Service's ClusterIP instead of Kubernetes DNS.
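For example, a sketch of mounting via the ClusterIP from a client with mount.cifs available (service name, namespace, share and credentials are placeholders):
SMB_IP=$(kubectl -n storage get svc smb-server -o jsonpath='{.spec.clusterIP}')
mount -t cifs "//${SMB_IP}/exports" /mnt/gcs \
    -o username=smbuser,password=secret,uid=1500,gid=1500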
The comment from @AviD about using a CSI driver is very helpful if you need a quick solution.
Again thanks to all.

Related

How to solve file permission issues when developing with Apache HTTP server in Docker?

My Dockerfile extends from php:8.1-apache. The following happens while developing:
The application creates log files (as www-data, 33:33)
I create files (as the image's default user root, 0:0) within the container
These files are mounted on my host where I'm acting as user (1000:1000). Of course I'm running into file permission issues now. I'd like to update/delete files created in the container on my host and vice versa.
My current solution is to set the image's user to www-data. In that way, all created files belong to it. Then, I change its user and group id from 33 to 1000. That solves my file permission issues.
However, this leads to another problem:
I'm prepending sudo -E to the entrypoint and command. I'm doing that because they're normally running as root and my custom entrypoint requires root permissions. But in that way the stop signal stops working and the container has to be killed when I want it to stop:
~$ time docker-compose down
Stopping test_app ... done
Removing test_app ... done
Removing network test_default
real 0m10,645s
user 0m0,167s
sys 0m0,004s
Here's my Dockerfile:
FROM php:8.1-apache AS base
FROM base AS dev
COPY entrypoint.dev.sh /usr/local/bin/custom-entrypoint.sh
ARG user_id=1000
ARG group_id=1000
RUN set -xe \
    # Create a home directory for www-data
    && mkdir --parents /home/www-data \
    && chown --recursive www-data:www-data /home/www-data \
    # Make www-data's user and group id match my host user's ones (1000 and 1000)
    && usermod --home /home/www-data --uid $user_id www-data \
    && groupmod --gid $group_id www-data \
    # Add sudo and let www-data execute it without asking for a password
    && apt-get update \
    && apt-get install --yes --no-install-recommends sudo \
    && rm --recursive --force /var/lib/apt/lists/* \
    && echo "www-data ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/www-data
USER www-data
# Run entrypoint and command as sudo, as my entrypoint does some config substitution and both normally run as root
ENTRYPOINT [ "sudo", "-E", "custom-entrypoint.sh" ]
CMD [ "sudo", "-E", "apache2-foreground" ]
Here's my custom-entrypoint.sh
#!/bin/sh
set -e
sed --in-place 's#^RemoteIPTrustedProxy.*#RemoteIPTrustedProxy '"$REMOTEIP_TRUSTED_PROXY"'#' $APACHE_CONFDIR/conf-available/remoteip.conf
exec docker-php-entrypoint "$@"
What do I need to do to make the container catch the stop signal (it is SIGWINCH for the Apache server) again? Or is there a better way to handle the file permission issues, so I don't need to run the entrypoint and command with sudo -E?
What do I need to do to make the container catch the stop signal (it is SIGWINCH for the Apache server) again?
First, get rid of sudo. If you need to be root in your container, run it as root with USER root in your Dockerfile. There's little added value in sudo inside a container, since it should be an environment to run one app, not a multi-user general-purpose Linux host.
Or is there a better way to handle the file permission issues, so I don't need to run the entrypoint and command with sudo -E?
The pattern I go with is to have developers launch the container as root, and have the entrypoint detect the uid/gid of the mounted volume, and adjust the uid/gid of the user in the container to match that id before running gosu to drop permissions and run as that user. I've included a lot of this logic in my base image example (note the fix-perms script that tweaks the uid/gid). Another example of that pattern is in my jenkins-docker image.
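A minimal sketch of that entrypoint pattern (assuming gosu is installed in the image and the project is bind-mounted at /var/www/html; this is illustrative, not the actual fix-perms script):
#!/bin/sh
set -e
# Started as root; align www-data with the owner of the bind mount.
TARGET_UID=$(stat -c '%u' /var/www/html)
TARGET_GID=$(stat -c '%g' /var/www/html)
if [ "$TARGET_UID" != "0" ]; then
    groupmod --non-unique --gid "$TARGET_GID" www-data
    usermod  --non-unique --uid "$TARGET_UID" --gid "$TARGET_GID" www-data
fi
# Drop privileges and exec the real command so Apache receives signals directly.
exec gosu www-data "$@"
For the Apache case you would also either listen on an unprivileged port or grant the binary cap_net_bind_service, since www-data cannot bind port 80.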
You'll still need to either configure root's login shell to automatically run gosu inside the container, or remember to always pass -u www-data when you exec into your container, but now that uid/gid will match your host.
This is primarily for development. In production, you probably don't want host volumes, use named volumes instead, or at least hardcode the uid/gid of the user in the image to match the desired id on the production hosts. That means the Dockerfile would still have USER www-data but the docker-compose.yml for developers would have user: root that doesn't exist in the compose file in production. You can find a bit more on this in my DockerCon 2019 talk (video here).
You can use user namespaces to map users and groups inside the container to different users and groups on the host.
For example, the group www-data (gid 33) in the container could be the group docker-www-data (gid 100033) on the host; you just have to be in that group to access the log files.
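A rough sketch of enabling that remapping on the Docker daemon (the actual subordinate ID ranges come from /etc/subuid and /etc/subgid and differ per host):
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker
# Docker creates a "dockremap" user; a range such as dockremap:100000:65536
# is what turns container gid 33 (www-data) into host gid 100033.
grep dockremap /etc/subuid /etc/subgid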

How to build and deploy pgadmin to OpenShift?

I want to deploy pgAdmin in my OpenShift namespace, but when I deploy the default pgAdmin image from Docker Hub, I get an error:
/entrypoint.sh: line 8: can't create /pgadmin4/config_distro.py: Permission denied
You need to define the PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD environment variables.
But I don't have full privileges in OpenShift: I am not a cluster admin, so I can't run oc adm policy add-scc.
I tried to create a Dockerfile:
FROM dpage/pgadmin4 as pgadmin4
USER root
RUN chown 1000640000:1000640000 /pgadmin4 && \
    sed -i 's/5050/1000720000/g' /etc/passwd && \
    sed -i 's/5050/1000720000/g' /etc/group && \
    find / -user 5050 -exec chown 1000720000 {} \; && \
    find / -group 5050 -exec chown :1000720000 {} \; && \
    sed 's#python /run_pgadmin.py#python /pgadmin4/run_pgadmin.py#g' /entrypoint.sh
USER 1000640000
VOLUME /var/lib/pgadmin
EXPOSE 80 443
ENTRYPOINT ["/entrypoint.sh"]
And if I docker build this and deploy it to OpenShift, I still get the error.
Maybe there is some other way to work around the restriction and install pgAdmin?
Do what @rzlvmp advised you. Also, do not EXPOSE 80 443. In OpenShift, if you do not have the cluster-admin role needed to perform oc adm policy add-scc (and even if you have it, dealing with SCCs is not a good idea anyway), you are not able to bind ports lower than 1024; only a process started under the root UID can use ports lower than 1024. Use EXPOSE 8080 8443, for example.
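For instance, the stock dpage/pgadmin4 image can be told to listen on an unprivileged port through its PGADMIN_LISTEN_PORT environment variable (the deployment name and values below are placeholders):
oc set env deployment/pgadmin \
    PGADMIN_LISTEN_PORT=8080 \
    PGADMIN_DEFAULT_EMAIL=admin@example.com \
    PGADMIN_DEFAULT_PASSWORD=changeme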
File permissions work better in OpenShift if you recognize that your container runs as a member of group 0 (root), instead of trying to fine-tune permissions for the arbitrary UID.
Instead of embedding that high UID in your container image, chgrp the relevant directories to group 0 and chmod them g+rwx.
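A minimal sketch of that approach, run as root while building the image (the paths are assumptions based on where pgAdmin writes by default; adjust as needed):
# e.g. inside a single RUN instruction, before switching to a non-root USER
chgrp -R 0 /pgadmin4 /var/lib/pgadmin && \
chmod -R g+rwX /pgadmin4 /var/lib/pgadmin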

Node Manager is reachable but the managed server is not reachable

I'm facing a problem when configuring a multi-host environment with Docker Machine. I have two machines, Master and Worker. The Worker has the Node Manager running and it is reachable from the Master, but the managed server running on the Worker machine is not reachable.
Can anyone help me through this?
Dockerfile:
# LICENSE CDDL 1.0 + GPL 2.0
#
# Copyright (c) 2014-2015 Oracle and/or its affiliates. All rights reserved.
#
# ORACLE DOCKERFILES PROJECT
# --------------------------
# This Dockerfile extends the Oracle WebLogic image by creating a sample domain.
#
# The 'base-domain' created here has Java EE 7 APIs enabled by default:
# - JAX-RS 2.0 shared lib deployed
# - JPA 2.1,
# - WebSockets and JSON-P
#
# Util scripts are copied into the image enabling users to plug NodeManager
# magically into the AdminServer running on another container as a Machine.
#
# HOW TO BUILD THIS IMAGE
# -----------------------
# Put all downloaded files in the same directory as this Dockerfile
# Run:
# $ sudo docker build -t 1213-domain --build-arg ADMIN_PASSWORD=welcome1 .
#
# Pull base image
# ---------------
FROM oracle/weblogic:12.1.3-developer
# Maintainer
# ----------
MAINTAINER Bruno Borges <bruno.borges@oracle.com>
# WLS Configuration
# -------------------------------
ARG ADMIN_PASSWORD
ENV DOMAIN_NAME="base_domain" \
    DOMAIN_HOME="/u01/oracle/user_projects/domains/base_domain" \
    JAVA_OPTIONS="${JAVA_OPTIONS} -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -server -XX:PermSize=512m -XX:MaxPermSize=512m -Dapplication.properties.location=/u01/oracle/user_projects/domains/base_domain -DFundCenterProperties.Location=/u01/oracle/user_projects/domains/base_domain/FundCenterProperties.xml -Dlog4j.configuration=file:/u01/oracle/user_projects/domains/base_domain/log4j.properties" \
    ADMIN_PORT="7007" \
    ADMIN_HOST="wlsadmin" \
    NM_PORT="5556" \
    MS_PORT="7003" \
    CONFIG_JVM_ARGS="-Dweblogic.security.SSL.ignoreHostnameVerification=true" \
    PATH=$PATH:/u01/oracle/oracle_common/common/bin:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/base_domain/bin:/u01/oracle
# Add files required to build this image
USER root
COPY container-scripts/* /u01/oracle/
USER oracle
# Configuration of WLS Domain
WORKDIR /u01/oracle
RUN /u01/oracle/wlst /u01/oracle/create-wls-domain.py && \
    mkdir -p /u01/oracle/user_projects/domains/base_domain/servers/AdminServer/security && \
    echo "username=weblogic" > /u01/oracle/user_projects/domains/base_domain/servers/AdminServer/security/boot.properties && \
    echo "password=$ADMIN_PASSWORD" >> /u01/oracle/user_projects/domains/base_domain/servers/AdminServer/security/boot.properties && \
    echo ". /u01/oracle/user_projects/domains/base_domain/bin/setDomainEnv.sh" >> /u01/oracle/.bashrc && \
    echo "export PATH=$PATH:/u01/oracle/wlserver/common/bin:/u01/oracle/user_projects/domains/base_domain/bin" >> /u01/oracle/.bashrc && \
    cp /u01/oracle/commEnv.sh /u01/oracle/wlserver/common/bin/commEnv.sh && \
    rm /u01/oracle/create-wls-domain.py /u01/oracle/jaxrs2-template.jar
COPY log4j.properties /u01/oracle/user_projects/domains/base_domain/
EXPOSE $NM_PORT $ADMIN_PORT $MS_PORT 5005
# Expose Node Manager default port, and also default http/https ports for admin console
WORKDIR $DOMAIN_HOME
# Define default command to start bash.
CMD ["startWebLogic.sh"]
Docker run command:
docker run -d --name webapp -p 7007:7007 -p 7003:7003 -p 5556:5556 -v /c/webapp:/webapp 1213-domain
By the way, I'm using the WebLogic 12.1.3 image for this deployment.

How do you run an Openshift Docker container as something besides root?

I'm currently running OpenShift, but I am running into a problem when I try to build/deploy my custom Docker container. The container works properly on my local machine, but once it gets built in OpenShift and I try to deploy it, I get the error message below. I believe the problem is that I am trying to run commands inside the container as root.
(13)Permission denied: AH00058: Error retrieving pid file /run/httpd/httpd.pid
My Dockerfile looks like this:
FROM centos:7
MAINTAINER me<me@me>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN git clone https://github.com/dockerFileBootstrap.git
RUN ansible-playbook "-e edit_url=andrewgarfield edit_alias=emmastone site_url=testing.com" dockerAnsible/dockerFileBootstrap.yml
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
    rm -f /lib/systemd/system/multi-user.target.wants/*;\
    rm -f /etc/systemd/system/*.wants/*;\
    rm -f /lib/systemd/system/local-fs.target.wants/*; \
    rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
    rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
    rm -f /lib/systemd/system/basic.target.wants/*;\
    rm -f /lib/systemd/system/anaconda.target.wants/*;
COPY supervisord.conf /usr/etc/supervisord.conf
RUN rm -rf supervisord.conf
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 80 443
#CMD ["/usr/bin/supervisord"]
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I've run into similar problems multiple times, where it says things like "Permission denied" on /supervisord.log or similar files.
How can I set it up so that my container doesn't run all of the commands as root? It seems to be causing all of the problems I'm having.
OpenShift has a strict security policy regarding custom Docker builds.
Have a look at the OpenShift Application Platform documentation.
In particular, see point 4 in the FAQ section, quoted here:
4. Why doesn't my Docker image run on OpenShift?
Security! Origin runs with the following security policy by default:
Containers run as a non-root unique user that is separate from other system users
They cannot access host resources, run privileged, or become root
They are given CPU and memory limits defined by the system administrator
Any persistent storage they access will be under a unique SELinux label, which prevents others from seeing their content
These settings are per project, so containers in different projects cannot see each other by default
Regular users can run Docker, source, and custom builds
By default, Docker builds can (and often do) run as root. You can control who can create Docker builds through the builds/docker and builds/custom policy resource.
Regular users and project admins cannot change their security quotas.
Many Docker containers expect to run as root (and therefore edit all the contents of the filesystem). The Image Author's guide gives recommendations on making your image more secure by default:
Don't run as root
Make directories you want to write to group-writable and owned by group id 0
Set the net-bind capability on your executables if they need to bind to ports <1024
Otherwise, you can see the security documentation for descriptions on how to relax these restrictions.
I hope it helps.
Although you don't have access to root, your OpenShift container, by default, is a member of the root group. You can change some dir/file permissions to avoid the Permission Denied errors.
If you're using a Dockerfile to deploy an image to OpenShift, you can add the following RUN command to your Dockerfile:
RUN chgrp -R 0 /run && chmod -R g=u /run
This will change the group for everything in the /run directory to the root group and then set the group permission on all files to be equivalent to the owner (group equals user) of the file. Essentially, any user in the root group has the same permissions as the owner for every file.
You can run Docker containers as any user, including root (and not OpenShift's default built-in account UID, e.g. 1000030000), by issuing these two commands in sequence with the oc CLI tools:
oc login -u system:admin -n default
followed by
oc adm policy add-scc-to-user anyuid -z default -n projectname
where projectname is the name of the project in which your container is deployed.

Vagrant synced_folder not working with docker provider build_dir (Windows)

I can't get Vagrant on Windows to build from a Dockerfile. If I use an image (e.g. d.image = "phusion/baseimage") instead of build_dir, everything is fine, but when building from a Dockerfile (as shown in the Vagrantfile below) I get the following error (and yes, I do have a Dockerfile in infrastructure/ssh-docker):
PS C:\privat\cloud-backup\cloud-backup-for-podio> vagrant up
Bringing machine 'app' up with 'docker' provider...
==> app: Docker host is required. One will be created if necessary...
app: Docker host VM is already ready.
==> app: Syncing folders to the host VM...
app: Preparing SMB shared folders...
app: Mounting SMB shared folders...
C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.7.4/plugins/guests/linux/cap/choose_addressable_ip_addr.rb:7:in `block in choose_addressable_ip_addr': undefined method `each' for nil:NilClass (NoMethodError)
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.7.4/plugins/guests/linux/cap/choose_addressable_ip_addr.rb:6:in `tap'
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.7.4/plugins/guests/linux/cap/choose_addressable_ip_addr.rb:6:in `choose_addressable_ip_addr'
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/capability_host.rb:111:in `call'
from C:/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/capability_host.rb:111:in `capability'
...
Vagrantfile:
Vagrant.configure("2") do |config|
  config.ssh.username = 'vagrant'
  config.ssh.password = 'tcuser'
  config.ssh.port = 22
  config.vm.define "app" do |app|
    app.vm.synced_folder ".", "/vagrant", type: "smb", smb_host: "MY_IP", smb_username: "WINUSER@DOMAIN", smb_password: "WINPASSWORD"
    app.vm.provider "docker" do |d|
      #d.image = "phusion/baseimage"
      d.build_dir = "infrastructure/ssh-docker"
      d.name = "app"
      d.remains_running = true
    end
  end
end
Dockerfile:
FROM phusion/baseimage
ENV HOME /root
# enable ssh
RUN rm -f /etc/service/sshd/down
# Regenerate SSH host keys. baseimage-docker does not contain any, so you
# have to do that yourself. You may also comment out this instruction; the
# init system will auto-generate one during boot.
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
RUN apt-get update
RUN apt-get install -y openssh-server wget lsb-release sudo
EXPOSE 22
RUN mkdir -p /var/run/sshd
RUN chmod 0755 /var/run/sshd
# Create and configure vagrant user
RUN useradd --create-home -s /bin/bash vagrant
WORKDIR /home/vagrant
# Configure SSH access
RUN mkdir -p /home/vagrant/.ssh
RUN echo "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key" > /home/vagrant/.ssh/authorized_keys
RUN chown -R vagrant: /home/vagrant/.ssh
RUN echo -n 'vagrant:vagrant' | chpasswd
# Enable passwordless sudo for the "vagrant" user
RUN mkdir -p /etc/sudoers.d
RUN install -b -m 0440 /dev/null /etc/sudoers.d/vagrant
RUN echo 'vagrant ALL=NOPASSWD: ALL' >> /etc/sudoers.d/vagrant
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Vagrant version: 1.7.4
Does anyone have an idea?
(What I need is to run a Docker image built from a Dockerfile and have a shared/synced directory.)
You should provide your IP for the SMB service:
config.vm.synced_folder ".", "/home/vagrant/github-api", type: 'smb', smb_host: "192.168.1.100"
