I have a Docker image with an Ubuntu base image and Tomcat installed on top of it. After the docker build, I am able to run the image locally without any issue, but when it is deployed on OpenShift, it fails to start.
Dockerfile
FROM ubuntu:latest
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install openjdk-8-jdk wget
RUN wget http://apache.stu.edu.tw/tomcat/tomcat-8/v8.5.58/bin/apache-tomcat-8.5.58.tar.gz -O /tmp/tomcat.tar.gz && \
    mkdir -p /usr/local/tomcat && \
    cd /tmp && tar xvfz tomcat.tar.gz && \
    cp -Rv /tmp/apache-tomcat-8.5.58/* /usr/local/tomcat/
EXPOSE 8080
CMD /usr/local/tomcat/bin/catalina.sh run
By default, OpenShift Container Platform runs containers using an arbitrarily assigned user ID. For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.
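The `chmod -R g=u` idiom used for this copies each file's user (owner) permission bits onto its group bits, so whatever the owner can do, the root group can too. A minimal standalone sketch of what it does to a single file:

```shell
# Demonstrate "chmod g=u": copy the user permission bits to the group bits.
f=$(mktemp)
chmod 0744 "$f"             # user: rwx, group: r--, other: r--
chmod g=u "$f"              # group bits now mirror the user bits
perms=$(stat -c '%a' "$f")  # GNU stat: print the octal mode
echo "$perms"               # 774
rm -f "$f"
```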
Here is the modified Dockerfile
FROM ubuntu:latest
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get -y install openjdk-8-jdk wget
RUN wget http://apache.stu.edu.tw/tomcat/tomcat-8/v8.5.58/bin/apache-tomcat-8.5.58.tar.gz -O /tmp/tomcat.tar.gz && \
    mkdir -p /usr/local/tomcat && \
    cd /tmp && tar xvfz tomcat.tar.gz && \
    cp -Rv /tmp/apache-tomcat-8.5.58/* /usr/local/tomcat/
#Add a user ubuntu with UID 1001
RUN useradd -rm -d /home/ubuntu -s /bin/bash -g root -G sudo -u 1001 ubuntu && \
    chown -R ubuntu:root /usr/local/tomcat && \
    chgrp -R 0 /usr/local/tomcat && \
    chmod -R g=u /usr/local/tomcat
#Specify the user with UID
USER 1001
EXPOSE 8080
CMD /usr/local/tomcat/bin/catalina.sh run
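You can also check locally that the image really tolerates an arbitrary non-root UID before pushing it, by mimicking what OpenShift does (a sketch; myimage and the UID 12345 are placeholders):

```shell
# Run as an arbitrary UID in the root group, as OpenShift would
docker run --rm -u 12345:0 myimage id
# Verify Tomcat can still write its logs/work dirs under that UID
docker run --rm -u 12345:0 myimage /usr/local/tomcat/bin/catalina.sh run
```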
Refer to the section "Support Arbitrary User IDs" in the image creation guidelines from OpenShift.
To relax the security in your cluster so that images are not forced to run as a pre-allocated UID, without granting everyone access to the privileged SCC:
Grant all authenticated users access to the anyuid SCC:
$ oc adm policy add-scc-to-group anyuid system:authenticated
This allows images to run as the root UID if no USER is specified in the Dockerfile.
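A narrower alternative (a sketch; the project name myproject, service account tomcat-sa, and deployment myapp are placeholders for your own names) is to grant anyuid only to a dedicated service account instead of to all authenticated users:

```shell
# Create a dedicated service account and grant it the anyuid SCC
oc create serviceaccount tomcat-sa -n myproject
oc adm policy add-scc-to-user anyuid -z tomcat-sa -n myproject
# Point the deployment at that service account
oc set serviceaccount deployment/myapp tomcat-sa -n myproject
```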
I have a Dockerfile in which I use wget to download something into the image, but the build is failing with "wget: command not found". When I googled, I found suggestions to install wget like below:
RUN apt update && apt upgrade
RUN apt install wget
Docker File:
FROM openjdk:17
LABEL maintainer="app"
ARG uname
ARG pwd
RUN useradd -ms /bin/bash -u 1000 user1
COPY . /app
WORKDIR /app
RUN ./gradlew build -PmavenUsername=$uname -PmavenPassword=$pwd
ARG YOURKIT_VERSION=2021.11
ARG POLARIS_YK_DIR=YourKit-JavaProfiler-2019.8
RUN wget https://www.yourkit.com/download/docker/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip --no-check-certificate -P /tmp/ && \
unzip /tmp/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip -d /usr/local && \
mv /usr/local/YourKit-JavaProfiler-${YOURKIT_VERSION} /usr/local/$POLARIS_YK_DIR && \
rm /tmp/YourKit-JavaProfiler-${YOURKIT_VERSION}-docker.zip
EXPOSE 10001
EXPOSE 8080
EXPOSE 5005
USER 1000
ENTRYPOINT ["sh", "/docker_entrypoint.sh"]
On doing this I am getting the error "apt-get: command not found". Can someone suggest a solution?
The openjdk image you use is based on Oracle Linux, which uses microdnf rather than apt as its package manager.
To install wget (and unzip which you also need), you can add this to your Dockerfile:
RUN microdnf update \
&& microdnf install --nodocs wget unzip \
&& microdnf clean all \
&& rm -rf /var/cache/yum
The commands clean up the package cache after installing, to keep the image size as small as possible.
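Placement matters: the install step has to appear before the RUN line that calls wget. Folded into the Dockerfile from the question, the start would look roughly like this (build args and the rest of the file elided):

```dockerfile
FROM openjdk:17
# Install wget and unzip with microdnf (Oracle Linux base), then clean up
RUN microdnf update \
 && microdnf install --nodocs wget unzip \
 && microdnf clean all \
 && rm -rf /var/cache/yum
# ...wget and unzip are now available for the later RUN instructions
```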
I have defined a node in the Jenkins cloud config with the following Dockerfile. Everything is fine, except that when I run a job it runs as the root user.
FROM jenkins/inbound-agent:alpine as jnlp
FROM maven:3.6.3-jdk-11
ARG DOCKER_VERSION=18.03.0-ce
ARG DOCKER_COMPOSE_VERSION=1.21.0
ARG USER=jenkins
USER root
COPY --from=jnlp /usr/local/bin/jenkins-agent /usr/local/bin/jenkins-agent
COPY --from=jnlp /usr/share/jenkins/agent.jar /usr/share/jenkins/agent.jar
RUN apt-get install ca-certificates wget -y \
&& rm -r /var/lib/apt/lists /var/cache/apt/archives \
&& wget https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/linux/amd64/kubectl -q -O /usr/local/bin/kubectl \
&& chmod a+x /usr/local/bin/kubectl
RUN curl -fsSL https://download.docker.com/linux/static/stable/`uname -m`/docker-$DOCKER_VERSION.tgz | tar --strip-components=1 -xz -C /usr/local/bin docker/docker
RUN curl -fsSL https://github.com/docker/compose/releases/download/$DOCKER_COMPOSE_VERSION/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose
RUN touch /debug-flag
RUN useradd jenkins
USER jenkins
ENTRYPOINT ["/usr/local/bin/jenkins-agent"]
When my job starts, the user is root, but I need some jobs to run as a non-root user. My shared library on Jenkins works well.
The only problem I have is when I run tests on some projects with an embedded database, which needs to run as a non-root user.
Check your Pod Template and Container Template to see whether you have set "runAsUser" and "runAsGroup". You can set both to 1000, the UID and GID of the default "jenkins" user.
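If you configure the agent via a raw pod template YAML, the equivalent securityContext would look something like this (a sketch; 1000 assumes the jenkins user created in your Dockerfile has UID/GID 1000, and the image name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsUser: 1000    # UID of the "jenkins" user created in the image
    runAsGroup: 1000   # its primary group
  containers:
    - name: jnlp
      image: my-registry/my-jenkins-agent:latest  # placeholder image name
```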
I am struggling with permissions on a Docker volume; I get access denied for writing.
This is a small part of my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
build-essential \
ca-certificates \
curl \
vim && \............
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
apt install -y python3-pip && \
pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=$USER /node_api/* /home/$USER/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
Also a small part of my startup.sh
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker builds command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem I have is that the Python script creates the folder /home/user01/.client and copies some files there. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes. With the volume mapped, I get permission denied, so the Python script is no longer able to write.
So at the end of my Dockerfile, this instruction, combined with the mapping in the docker run command, gives me the permission denied:
VOLUME /data_storage
Any suggestions on how to resolve this? Are more permissions needed for "user01"?
Thanks
I was able to resolve my issue by removing the VOLUME instruction from the Dockerfile and just doing the mapping when executing docker run:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
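Another approach that keeps a volume is to create the mount directory in the image with the right owner before declaring it: when a fresh named volume is first mounted over an image directory, Docker copies that directory's contents and ownership into the volume. A sketch, using the user01 user from the question:

```dockerfile
# Pre-create the data directory so a fresh named volume inherits its ownership
RUN mkdir -p /home/user01/.client && \
    chown -R user01:user01 /home/user01/.client
USER user01
VOLUME /home/user01/.client
```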
I am new to Docker. I have set up a Docker container on an Amazon Linux box.
I have a Dockerfile which installs Tomcat, Java, and a WAR.
I can see all the installations present in the Docker container when I navigate through the exact folders I mentioned in the Dockerfile.
When I run the Docker container it says the Tomcat server has started, and I have also tailed the logs, so I can see the service is running.
But when I open the host IP on port 8080, it says the URL can't be reached.
These are the commands to build and run the image, which work fine, and I can see the status as running.
docker build -t friendly1 .
docker run -p 8080:8080 friendly1
What am I missing here? Request some help on this.
FROM centos:latest
RUN yum -y update && \
yum -y install wget && \
yum -y install tar && \
yum -y install zip unzip
ENV JAVA_HOME /opt/java/jdk1.7.0_67/
ENV CATALINA_HOME /opt/tomcat/apache-tomcat-7.0.70
ENV SAVIYNT_HOME /opt/tomcat/apache-tomcat-7.0.70/webapps
ENV PATH $PATH:$JAVA_HOME/jre/jdk1.7.0_67/bin:$CATALINA_HOME/bin:$CATALINA_HOME/scripts:$CATALINA_HOME/apache-tomcat-7.0.70/bin
ENV JAVA_VERSION 7u67
ENV JAVA_BUILD 7u67
RUN mkdir /opt/java/
RUN wget https://<S3location>/jdk-7u67-linux-x64.gz && \
tar -xvf jdk-7u67-linux-x64.gz && \
#rm jdk*.gz && \
mv jdk* /opt/java/
# Install Tomcat
ENV TOMCAT_MAJOR 7
ENV TOMCAT_VERSION 7.0.70
RUN mkdir /opt/tomcat/
RUN wget https://<s3location>/apache-tomcat-7.0.70.tar.gz && \
tar -xvf apache-tomcat-${TOMCAT_VERSION}.tar.gz && \
#rm apache-tomcat*.tar.gz && \
mv apache-tomcat* /opt/tomcat/
RUN chmod +x ${CATALINA_HOME}/bin/*sh
WORKDIR /opt/tomcat/apache-tomcat-7.0.70/
CMD "startup.sh" && tail -f /opt/tomcat/apache-tomcat-7.0.70/logs/*
EXPOSE 8080
I want to mount s3fs inside of a Docker container.
I made a Docker image with s3fs, and ran it like this:
host$ docker run -it --rm docker/s3fs bash
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
fuse: failed to open /dev/fuse: Operation not permitted
Showing "Operation not permitted" error.
So I googled, and tried again like this (adding --privileged=true):
host$ docker run -it --rm --privileged=true docker/s3fs bash
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
[ root@container:~ ]$ ls /mnt/s3bucket
ls: cannot access /mnt/s3bucket: Transport endpoint is not connected
[ root@container:~ ]$ fusermount -u /mnt/s3bucket
[ root@container:~ ]$ s3fs s3bucket /mnt/s3bucket -o allow_other -o allow_other,default_acl=public-read -ouse_cache=/tmp
[ root@container:~ ]$ ls /mnt/s3bucket
ls: cannot access /mnt/s3bucket: Transport endpoint is not connected
So mounting no longer shows an error, but when I run the ls command, a "Transport endpoint is not connected" error occurs.
How can I mount s3fs inside of docker container?
Is it impossible?
[UPDATED]
Add Dockerfile configuration.
Dockerfile:
FROM dockerfile/ubuntu
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y libfuse-dev
RUN apt-get install -y fuse
RUN apt-get install -y libcurl4-openssl-dev
RUN apt-get install -y libxml2-dev
RUN apt-get install -y mime-support
RUN \
cd /usr/src && \
wget http://s3fs.googlecode.com/files/s3fs-1.74.tar.gz && \
tar xvzf s3fs-1.74.tar.gz && \
cd s3fs-1.74/ && \
./configure --prefix=/usr && \
make && make install
ADD passwd/passwd-s3fs /etc/passwd-s3fs
ADD rules.d/99-fuse.rules /etc/udev/rules.d/99-fuse.rules
RUN chmod 640 /etc/passwd-s3fs
RUN mkdir /mnt/s3bucket
rules.d/99-fuse.rules:
KERNEL=="fuse", MODE="0777"
I'm not sure what you did that did not work, but I was able to get this to work like this:
Dockerfile:
FROM ubuntu:12.04
RUN apt-get update -qq
RUN apt-get install -y build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support automake libtool wget tar
RUN wget https://github.com/s3fs-fuse/s3fs-fuse/archive/v1.77.tar.gz -O /usr/src/v1.77.tar.gz
RUN tar xvz -C /usr/src -f /usr/src/v1.77.tar.gz
RUN cd /usr/src/s3fs-fuse-1.77 && ./autogen.sh && ./configure --prefix=/usr && make && make install
RUN mkdir /s3bucket
After building with:
docker build --rm -t ubuntu/s3fs:latest .
I ran the container with:
docker run -it -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured --privileged ubuntu/s3fs:latest bash
and then inside the container:
root@efa2689dca96:/# s3fs s3bucket /s3bucket
root@efa2689dca96:/# ls /s3bucket
testing.this.out work.please working
root@efa2689dca96:/#
which successfully listed the files in my s3bucket.
You do need to make sure the kernel on your host machine supports fuse, but it would seem you have already done so?
Note: Your S3 mountpoint will not show/work from inside other containers when using Docker's --volume or --volumes-from directives. For example:
docker run -t --detach --name testmount -v /s3bucket -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured --privileged --entrypoint /usr/bin/s3fs ubuntu/s3fs:latest -f s3bucket /s3bucket
docker run -it --volumes-from testmount --entrypoint /bin/ls ubuntu:12.04 -ahl /s3bucket
total 8.0K
drwxr-xr-x 2 root root 4.0K Aug 21 21:32 .
drwxr-xr-x 51 root root 4.0K Aug 21 21:33 ..
returns no files even though there are files in the bucket.
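If you need the bucket contents visible to other containers anyway, one workaround (a sketch, not from the answer above; paths and credentials are placeholders) is to mount s3fs over a host directory bind-mounted with shared propagation instead of a Docker volume:

```shell
# On the host: create a shared mountpoint
mkdir -p /mnt/s3bucket
# Mount s3fs over a bind mount with :shared propagation, so the fuse
# mount becomes visible on the host and to other containers
docker run -d --privileged \
  -v /mnt/s3bucket:/s3bucket:shared \
  -e AWSACCESSKEYID=obscured -e AWSSECRETACCESSKEY=obscured \
  --entrypoint /usr/bin/s3fs ubuntu/s3fs:latest -f s3bucket /s3bucket
# Other containers can now bind-mount the same host path
docker run --rm -v /mnt/s3bucket:/data ubuntu:12.04 ls /data
```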
Adding another solution.
Dockerfile:
FROM ubuntu:16.04
# Update and install packages
RUN DEBIAN_FRONTEND=noninteractive apt-get -y update --fix-missing && \
apt-get install -y automake autotools-dev g++ git libcurl4-gnutls-dev wget libfuse-dev libssl-dev libxml2-dev make pkg-config
# Clone and run s3fs-fuse
RUN git clone https://github.com/s3fs-fuse/s3fs-fuse.git /tmp/s3fs-fuse && \
cd /tmp/s3fs-fuse && ./autogen.sh && ./configure && make && make install && ldconfig && /usr/local/bin/s3fs --version
# Remove packages
RUN DEBIAN_FRONTEND=noninteractive apt-get purge -y wget automake autotools-dev g++ git make && \
apt-get -y autoremove --purge && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Set user and group
ENV USER='appuser'
ENV GROUP='appuser'
ENV UID='1000'
ENV GID='1000'
RUN groupadd -g $GID $GROUP && \
useradd -u $UID -g $GROUP -s /bin/sh -m $USER
# Install fuse
RUN apt-get update && \
    apt-get install -y fuse && \
    chown ${USER}:${GROUP} /usr/local/bin/s3fs
# Config fuse
RUN chmod a+r /etc/fuse.conf && \
perl -i -pe 's/#user_allow_other/user_allow_other/g' /etc/fuse.conf
# Copy credentials
ENV SECRET_FILE_PATH=/home/${USER}/passwd-s3fs
COPY ./passwd-s3fs $SECRET_FILE_PATH
RUN chmod 600 $SECRET_FILE_PATH && \
    chown ${USER}:${GROUP} $SECRET_FILE_PATH
# Switch to user
USER ${UID}:${GID}
# Create mnt point
ENV MNT_POINT_PATH=/home/${USER}/data
RUN mkdir -p $MNT_POINT_PATH && \
chmod g+w $MNT_POINT_PATH
# Execute
ENV S3_BUCKET=''
WORKDIR /home/${USER}
CMD /usr/local/bin/s3fs $S3_BUCKET $MNT_POINT_PATH -o passwd_file=passwd-s3fs -o allow_other && exec sleep infinity
docker-compose-yaml:
version: '3.8'
services:
  s3fs:
    privileged: true
    image: <image-name:tag>
    ## Debug
    #stdin_open: true # docker run -i
    #tty: true        # docker run -t
    environment:
      - S3_BUCKET=my-bucket-name
    devices:
      - "/dev/fuse"
    cap_add:
      - SYS_ADMIN
      - DAC_READ_SEARCH
    cap_drop:
      - NET_ADMIN
Build image with docker build -t <image-name:tag> .
Run with: docker-compose up -d
If you would prefer to use docker-compose for testing on your localhost, use the following. Note that you don't need the --privileged flag, as we pass the SYS_ADMIN capability and the /dev/fuse device in docker-compose.yml.
create file .env
AWS_ACCESS_KEY_ID=xxxxxx
AWS_SECRET_ACCESS_KEY=xxxxxx
AWS_BUCKET_NAME=xxxxxx
create file docker-compose.yml
version: "3"
services:
  s3-fuse:
    image: debian-aws-s3-mount
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - AWSACCESSKEYID=${AWS_ACCESS_KEY_ID}
      - AWSSECRETACCESSKEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_BUCKET_NAME=${AWS_BUCKET_NAME}
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse
Create a file Dockerfile. You can use any Docker image you prefer, but first check that s3fs is packaged for your distro.
FROM node:16-bullseye
RUN apt-get update -qq
RUN apt-get install -y s3fs
RUN mkdir /s3_mnt
To run the container, execute:
$ docker-compose run --rm -t s3-fuse /bin/bash
Once inside the container, you can mount your S3 bucket by running the command:
# s3fs ${AWS_BUCKET_NAME} /s3_mnt
Note: For this setup to work .env, Dockerfile and docker-compose.yml must be created in the same directory. Don't forget to update your .env file with the correct credentials to the s3 bucket.