Converting Dockerfile + bash script into OpenShift deployment config - docker

I have this dockerfile:
FROM centos:latest
COPY mongodb.repo /etc/yum.repos.d/mongodb.repo
RUN yum update -y && \
    yum install -y mongodb-org mongodb-org-shell iproute nano
RUN mkdir -p /data/db && chown -R mongod:mongod /data
That I can build and run locally just fine with docker with:
docker build -t my-image .
docker run -t -d -p 27017:27017 --name my-container my-image
docker cp mongod.conf my-container:/etc/mongod.conf
docker exec -u mongod my-container "mongod --config /etc/mongod.conf"
Now I would like to run the container in OpenShift. I have managed to build and push the image to a namespace, and have created the DeploymentConfig below, which runs the container just like I can do locally.
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: my-app
    namespace: my-namespace
  spec:
    replicas: 1
    selector:
      app: my-app
    template:
      metadata:
        labels:
          app: my-app
          deploymentconfig: my-app
      spec:
        containers:
          - name: my-app
            image: ${IMAGE}
            command: ["mongod"]
            ports:
              - containerPort: 27017
            imagePullPolicy: Always
When I click deploy, the image is pulled successfully, but I get this error:
exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
Why does it not have read/write access to the /data/db folder? As can be seen from the Dockerfile above, that folder is created and the mongod user owns it.
Is it necessary to somehow grant the mongod user read/write on that folder in the DeploymentConfig as well?

On OpenShift, containers run as an unknown, arbitrary, non-root user (put simply, imagine mongod running as user 1000000001, but there is no guarantee that number will be chosen). This means the mongod user from your Dockerfile is likely not the user the process actually runs as, which causes these issues, so check the OpenShift documentation's guidelines on supporting arbitrary user IDs.
For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image:
# Set the group to 0 (the root group), then give the group the same permissions the owner already has
RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory
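Applied to the Dockerfile from the question, the fix might look roughly like this (a sketch; /data is the directory mongod writes to):
FROM centos:latest
COPY mongodb.repo /etc/yum.repos.d/mongodb.repo
RUN yum update -y && \
    yum install -y mongodb-org mongodb-org-shell iproute nano
# Group 0 owns /data and gets the same permissions as the owner, so the
# arbitrary OpenShift UID (always a member of the root group) can write there
RUN mkdir -p /data/db && \
    chown -R mongod:mongod /data && \
    chgrp -R 0 /data && \
    chmod -R g=u /data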
Your mongod.conf can be mounted as a ConfigMap, a Kubernetes/OpenShift object often used to manage configuration; it can be mounted into the pod as a (read-only) volume.
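A minimal sketch of that approach (the ConfigMap name mongod-config is a placeholder): create the ConfigMap from the file, then mount it over /etc/mongod.conf in the pod template of the DeploymentConfig and point mongod at it, mirroring the docker cp / docker exec steps from the question:
oc create configmap mongod-config --from-file=mongod.conf

      spec:
        containers:
          - name: my-app
            image: ${IMAGE}
            command: ["mongod", "--config", "/etc/mongod.conf"]
            volumeMounts:
              - name: mongod-config
                mountPath: /etc/mongod.conf
                subPath: mongod.conf
        volumes:
          - name: mongod-config
            configMap:
              name: mongod-config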
Edit - Command explanation
chgrp vs chown
chown would also work here, but since we are only interested in updating the group, chgrp provides a little extra safety by ensuring that is the only thing changed (e.g. by a mistyped command).
chmod -R g=u
chmod is used to change the file permissions;
-R tells it to recurse through the path (any subdirectories and files will receive the same permissions);
g=u means "group=user" or "give the group the same permissions as those that are already there for the user"
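As a quick illustration of the combined effect (hypothetical ls output), starting from a directory owned by mongod:mongod:
$ ls -ld /data/db
drwxr-xr-x 2 mongod mongod 4096 Jan 1 00:00 /data/db
$ chgrp -R 0 /data/db && chmod -R g=u /data/db
$ ls -ld /data/db
drwxrwxr-x 2 mongod root 4096 Jan 1 00:00 /data/db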

Related

Package installed in Dockerfile inaccessible in manifest file

I'm quite new to Kubernetes and Docker.
I am trying to create a Kubernetes CronJob which will, every x minutes, clone a repo, build the Dockerfile in that repo, then apply the manifest file to create the job.
Even though I install git in the CronJob's Dockerfile, any git command I run from the Kubernetes manifest is not recognised. How should I go about fixing this, please?
FROM python:3.8.10
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y git
RUN useradd -rm -d /home/worker -s /bin/bash -g root -G sudo -u 1001 worker
WORKDIR /home/worker
COPY . /home/worker
RUN chown -R 1001:1001 .
USER worker
ENTRYPOINT ["/bin/bash"]
apiVersion: "batch/v1"
kind: CronJob
metadata:
  name: cron-job-test
  namespace: me
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox:1.28
              imagePullPolicy: Always
              command:
                - /bin/sh
                - -c
              args:
                - git log;
          restartPolicy: OnFailure
You should use an image that has the git binary installed in order to run git commands. In the manifest you are using image: busybox:1.28 to run the pod, which does not have git installed, hence the error.
Use the correct image name and try again.
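For example, assuming the Dockerfile above has been built and pushed as a hypothetical image my-registry/cron-worker:latest, the CronJob could reference that image instead of busybox:
apiVersion: "batch/v1"
kind: CronJob
metadata:
  name: cron-job-test
  namespace: me
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              # image built from the Dockerfile above, which has git installed
              image: my-registry/cron-worker:latest  # hypothetical registry/tag
              imagePullPolicy: Always
              command: ["/bin/bash", "-c"]
              args: ["git log"]
          restartPolicy: OnFailure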

How to run only a specific command as root and other commands as the default user in docker-compose

This is my Dockerfile.
FROM python:3.8.12-slim-bullseye as prod-env
RUN apt-get update && apt-get install unzip vim -y
COPY requirements.txt /app
RUN pip install -r requirements.txt
USER nobody:nogroup
This is what docker-compose.yml looks like.
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
I want to add read, write and execute permissions on the shared directories, and I also need to run a couple of other commands as root.
So I have to execute this command as root every time after the image is built:
docker exec -it -u root api_server_1 bash -c "python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images"
Now, I want docker-compose to execute these lines.
But as you can see, the user in docker-compose has to be nobody, as specified by the Dockerfile. So how can I execute root commands in the docker-compose file?
One option I've been considering:
install sudo in the Dockerfile and use sudo.
Is there any better way?
In docker-compose.yml, create another service using the same image and volumes.
For this new service, override the user with user: root:root and the command with the command you want to run as root, then add a dependency so this new service starts before the regular working container.
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
  # This makes sure the startup order is correct and the api_server_decorator service starts first
  depends_on:
    - api_server_decorator

api_server_decorator:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  # No ports needed - it is only a decorator
  # Overriding USER with root:root
  user: "root:root"
  # Overriding command; wrapped in sh -c so the chained commands actually run through a shell
  command: sh -c "python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images"
There are other possibilities, such as changing the Dockerfile by removing the USER restriction and using an entrypoint script that does the privileged work as root and then switches to the unprivileged user with su - nobody, or better, exec gosu, to retain PID 1 and proper signal handling.
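A rough sketch of that entrypoint approach, assuming gosu is installed in the image, the USER nobody:nogroup line is removed, and the file name entrypoint.sh is illustrative:
# Dockerfile (tail) - no USER line, so the entrypoint starts as root
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

#!/bin/sh
# entrypoint.sh - do the privileged setup as root first
python copy_stuffs.py
chmod -R a+rwx /models /images
# then drop privileges; exec keeps PID 1 and proper signal handling
exec gosu nobody:nogroup "$@"
The command: from docker-compose.yml (the gunicorn line) is passed to the entrypoint as "$@", so the application itself still runs as nobody.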
In my eyes, giving a container root rights is quite hacky and dangerous: if you want to, for example, remove files written by the container, you need root rights on the host as well.
If you want to allow a container to access files on the host filesystem, just run the container with an appropriate user:
api_server:
  user: my_docker_user:my_docker_group
then, on the host, give that group the rights:
sudo chown -R my_docker_user:my_docker_group models
You should build all of the content you need into the image itself, especially if you have this use case of occasionally needing to run a process to update it (you are not trying to use an isolation tool like Docker to simulate a local development environment). In your Dockerfile, COPY these directories into the image:
COPY shared/model_server/models /models
COPY static/images /images
Do not make these directories writeable, and do not make the individual files in the directories executable. The directories will generally be mode 0755 and the files mode 0644, owned by root, and that's fine.
In the Compose setup, do not mount host content over these directories either. You should just have:
services:
  api_server:
    build: .  # use the same image in all environments
    image: company/server
    ports:
      - 8200:8200
    # no volumes:, do not override the image's command:
Now when you want to update the files, you can rebuild the image (without interrupting the running application, without docker exec, and without an alternate user)
docker-compose build api_server
and then do a relatively quick restart, running a new container on the updated image
docker-compose up -d

OpenShift - Run a basic container with Alpine, Java and JMeter

In an OpenShift environment (Kubernetes v1.18.3+47c0e71)
I am trying to run a very basic container which will contain:
Alpine (latest version)
JDK 1.8
JMeter 5.3
I just want it to boot and run in a container, expecting to connect to it and run the JMeter CLI from a command-line terminal.
I have gotten this to work perfectly in my local Docker installation. This is the Dockerfile content:
FROM alpine:latest
ARG JMETER_VERSION="5.3"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
USER root
ARG TZ="Europe/Amsterdam"
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& apk add --no-cache nss \
&& rm -rf /var/cache/apk/ \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies
# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN
WORKDIR ${JMETER_HOME}
For some reason, when I configure a Pod with a container using that exact image, previously uploaded to a private Docker image registry, it does not work.
This is the Deployment configuration (YAML) file (very basic as well):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter
  namespace: myNamespace
  labels:
    app: jmeter
    group: myGroup
spec:
  selector:
    matchLabels:
      app: jmeter
  replicas: 1
  template:
    metadata:
      labels:
        app: jmeter
    spec:
      containers:
        - name: jmeter
          image: myprivateregistry.azurecr.io/jmeter:dev
          resources:
            limits:
              cpu: 100m
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 500Mi
          imagePullPolicy: Always
      restartPolicy: Always
      imagePullSecrets:
        - name: myregistrysecret
Unfortunately, I am not getting any logs.
(A screenshot of the Pod events was attached here but is not reproduced.)
Unfortunately, I am also unable to access the container's terminal.
Any idea on:
how to get further logs?
what is going on?
On your local machine, you are likely using docker run -it <my_container_image> or similar. Using the -it option will run an interactive shell in your container without you specifying a CMD and will keep that shell running as the primary process started in your container. So by using this command, you are basically already specifying a command.
Kubernetes expects that the container image contains a process that is run on start (CMD) and that will run as long as the container is alive (for example a webserver).
In your case, Kubernetes is starting the container, but you are not specifying what should happen when the container image is started. This leads to the container immediately terminating, which is what you can see in the Events above. Because you are using a Deployment, the failing Pod is then restarted again and again.
A possible workaround to this is to run the sleep command in your container on startup, by specifying a command in your Pod like so:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
    - name: command-demo-container
      image: alpine
      command: ["/bin/sleep", "infinite"]
  restartPolicy: OnFailure
(Kubernetes documentation)
This will start the Pod and immediately run /bin/sleep infinite, making this never-terminating sleep the primary process. Your container will now run indefinitely, and you can use oc rsh <name_of_the_pod> to connect to the container and run anything you would like interactively (for example jmeter).
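For example (the pod name and test plan path are placeholders):
oc rsh jmeter-7d4b9c5d8-abcde
jmeter -n -t my-test-plan.jmx -l results.jtl
The jmeter binary is already on the PATH thanks to the ENV PATH line in the Dockerfile; -n runs JMeter in non-GUI mode, -t points at the test plan and -l writes the results file.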

How can I write to a directory owned by ROOT when not running as ROOT for a Tomcat Docker Image that I don't want to edit?

I have a Docker image that I am trying to run using K8s. I can get it to run in my home environment, but not at my workplace, as we cannot run as root on the k8s cluster.
The docker image is a Tomcat server with a WAR file that lives here:
/usr/local/tomcat/webapps/ROOT.war
During start-up Tomcat tries to explode the WAR into a directory here:
/usr/local/tomcat/webapps/ROOT
But it can't do this because /usr/local/tomcat/webapps/ is owned by ROOT.
So I thought the best way to solve this was to mount a volume with an emptyDir: {}, like so:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    # ...
    - name: test-container
      volumeMounts:
        - mountPath: /usr/local/tomcat/webapps/ROOT
          name: tomcat
  volumes:
    - name: tomcat
      emptyDir: {}
But this doesn't work, because it just creates an empty ROOT folder under webapps, which Tomcat can't explode the WAR into because it expects to create ROOT itself.
I also tried this:
volumeMounts:
  - mountPath: /usr/local/tomcat/webapps
    name: tomcat
But now /webapps is just an empty folder because I assume I'm overwriting what the container is setting up for me when it starts up.
I'm obviously missing something fundamental here. I don't want to edit the image, as I believe there must be another way around this; I simply want /tomcat/webapps to be writable by the runAsUser, which isn't root.
What is the best way to do this?
If you don't already have one, just create a Dockerfile for your image and add the following lines to it:
...
ENV USER=<YOUR-CONTAINER-USER>
ENV UID=10014
ENV GID=10014
RUN addgroup "$USER" --gid "$GID" \
    && adduser \
        --disabled-password \
        --gecos "" \
        --home "$(pwd)" \
        --ingroup "$USER" \
        --no-create-home \
        --uid "$UID" \
        "$USER"
RUN chown -R "$USER":"$USER" /usr/local/tomcat/webapps/ROOT
USER "$USER"
...
The simple way to fix this is via an initContainer:, although fsGroup: is the more declarative way, so long as your cluster's security policy allows setting that field (a sketch of the fsGroup: variant follows the example below):
spec:
  initContainers:
    - name: chown
      image: docker.io/library/busybox:latest
      command:
        - chown
        - -R
        - whatever-the-uid-is-for-your-tomcat-image
        - /usr/local/tomcat/webapps/ROOT
      volumeMounts:
        - mountPath: /usr/local/tomcat/webapps/ROOT
          name: tomcat
  containers:
    # ... as before
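For comparison, the fsGroup: variant would look roughly like this (the group ID 1000 is an assumption; use whatever your cluster's security policy permits):
spec:
  securityContext:
    # the emptyDir volume is chowned to this GID on mount and the GID is added to
    # the container processes' supplementary groups, so Tomcat can write to it
    fsGroup: 1000
  containers:
    # ... as before, with the same volumeMounts
  volumes:
    - name: tomcat
      emptyDir: {}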
I propose solving this at the Docker level, since you will most likely have to create a child image for your application anyway. This approach has the merit that you can test before deploying to K8s. Also, you produce a higher-quality image, not one that requires "tweaking" for safe deployment.
Doing this with Docker would look like this:
FROM tomcat
RUN addgroup --system -gid 1000 app && adduser --system -uid 1000 -gid 1000 app
RUN chown -R app /usr/local/tomcat/
USER app
COPY --chown=1000:root your-war /usr/local/tomcat/webapps
Any K8s-based solution, like overwriting the webapps folder with a volume and assigning non-root permissions to that volume, suffers from the problem that it will be hard to copy your WAR into the webapps folder. While complicated contraptions with init containers are theoretically possible, they are brittle and overly complicated (for starters, you would have to ensure the same users and groups in the init container and the main container; also, your WAR would need to be available in the init container).

Concurrent access to docker.sock on k8s

I would like to ask for help/advice with the following issue. We are using Bamboo as our CI and we have remote Bamboo agents running on k8s.
Our build has a step that creates a Docker image when the tests pass. We expose Docker to the remote Bamboo agents via docker.sock. When we had only one remote Bamboo agent (to test how it works), everything worked correctly, but recently we increased the number of remote agents. Now it happens quite often that a build gets stuck in the Docker image build step and will not move on. We have to stop the build and run it again. Usually there is no useful info in the logs, but once in a while this will appear:
24-May-2017 16:04:54 Execution failed for task ':...'.
24-May-2017 16:04:54 > Docker execution failed
24-May-2017 16:04:54 Command line [docker build -t ...] returned:
24-May-2017 16:04:54 time="2017-05-24T16:04:54+02:00" level=info msg="device or resource busy"
This is how our k8s deployment looks:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: bamboo-agent
  namespace: backend-ci
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: bamboo-agent
    spec:
      containers:
        - name: bamboo-agent
          stdin: true
          resources:
            .
          env:
            .
            .
            .
          ports:
            - .
          volumeMounts:
            - name: dockersocket
              mountPath: /var/run/docker.sock
      volumes:
        - hostPath:
            path: /var/run/docker.sock
          name: dockersocket
And here is the Dockerfile for the remote Bamboo agent:
FROM java:8
ENV CI true
RUN apt-get update && apt-get install -yq curl && apt-get -yqq install docker.io && apt-get install tzdata -yq
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && mv kubectl /usr/local/bin
RUN echo $TZ | tee /etc/timezone
RUN dpkg-reconfigure --frontend noninteractive tzdata
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64
RUN chmod +x /usr/local/bin/dumb-init
ADD run.sh /root
ADD .dockercfg /root
ADD config /root/.kube/
ADD config.json /root/.docker/
ADD gradle.properties /root/.gradle/
ADD bamboo-capabilities.properties /root
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD /root/run.sh
Is there some way to solve this issue? And is exposing docker.sock a good solution, or is there a better approach?
I have read a few articles about Docker-in-Docker, but I do not like --privileged mode.
If you need any other information, I will try to provide it.
Thank you.
One of the things you could do is run your builds on rkt while running Kubernetes on Docker.
