OpenShift - Run a basic container with Alpine, Java and JMeter - Docker

In an Openshift environment (Kubernetes v1.18.3+47c0e71)
I am trying to run a very basic container which will contain:
Alpine (latest version)
JDK 1.8
Jmeter 5.3
I just want it to boot and stay running in a container, so that I can connect to it and run the JMeter CLI from a terminal.
I have this working perfectly in my local Docker installation. This is the Dockerfile content:
FROM alpine:latest
ARG JMETER_VERSION="5.3"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
USER root
ARG TZ="Europe/Amsterdam"
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& apk add --no-cache nss \
&& rm -rf /var/cache/apk/ \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies
# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN
WORKDIR ${JMETER_HOME}
For some reason, when I configure a Pod with a container using that exact image (previously pushed to a private Docker registry), it does not work.
This is the Deployment configuration (YAML) file (very basic as well):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter
  namespace: myNamespace
  labels:
    app: jmeter
    group: myGroup
spec:
  selector:
    matchLabels:
      app: jmeter
  replicas: 1
  template:
    metadata:
      labels:
        app: jmeter
    spec:
      containers:
        - name: jmeter
          image: myprivateregistry.azurecr.io/jmeter:dev
          resources:
            limits:
              cpu: 100m
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 500Mi
          imagePullPolicy: Always
      restartPolicy: Always
      imagePullSecrets:
        - name: myregistrysecret
Unfortunately, I am not getting any logs. A screenshot of the Pod events (omitted here) only shows the container terminating and being restarted. I am also not able to access the terminal of the container.
Any idea on:
how to get further logs?
what is going on?

On your local machine, you are likely using docker run -it <my_container_image> or similar. Using the -it option will run an interactive shell in your container without you specifying a CMD and will keep that shell running as the primary process started in your container. So by using this command, you are basically already specifying a command.
Kubernetes expects that the container image contains a process that is run on start (CMD) and that will run as long as the container is alive (for example a webserver).
In your case, Kubernetes is starting the container, but you are not specifying what should happen when the container image is started. This leads to the container immediately terminating, which is what you can see in the Events above. Because you are using a Deployment, the failing Pod is then restarted again and again.
A possible workaround is to run the sleep command in your container on startup by specifying a command in your Pod, like so:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
    - name: command-demo-container
      image: alpine
      command: ["/bin/sleep", "infinite"]
  restartPolicy: OnFailure
(Kubernetes documentation)
This will start the Pod and immediately run the /bin/sleep infinite command, making this never-terminating sleep process the primary process. Your container will now run indefinitely. Now you can use oc rsh <name_of_the_pod> to connect to the container and run anything you would like interactively (for example jmeter).
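Applied to the Deployment from the question, a minimal sketch could look like the following. Only the command line is new (a shell loop is used here instead of sleep infinite, since the sleep shipped with some Alpine/busybox images may not accept an infinite duration); everything else is copied from the question:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter
  namespace: myNamespace
  labels:
    app: jmeter
    group: myGroup
spec:
  selector:
    matchLabels:
      app: jmeter
  replicas: 1
  template:
    metadata:
      labels:
        app: jmeter
    spec:
      containers:
        - name: jmeter
          image: myprivateregistry.azurecr.io/jmeter:dev
          # Keep the primary process alive so the container does not exit immediately
          command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
          resources:
            limits:
              cpu: 100m
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 500Mi
          imagePullPolicy: Always
      restartPolicy: Always
      imagePullSecrets:
        - name: myregistrysecret
Once the Pod stays in the Running state, oc rsh <pod-name> should drop you into a shell where jmeter is on the PATH, and oc logs <pod-name> / oc describe pod <pod-name> can be used for further troubleshooting.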

Related

Package installed in Dockerfile inaccessible in manifest file

I'm quite new to kubernetes and docker.
I am trying to create a kubernetes CronJob which will, every x minutes, clone a repo, build the docker file in that repo, then apply the manifest file to create the job.
Although I install git in the CronJob's Dockerfile, any git command I run from the Kubernetes manifest file is not recognised. How should I go about fixing this, please?
FROM python:3.8.10
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y git
RUN useradd -rm -d /home/worker -s /bin/bash -g root -G sudo -u 1001 worker
WORKDIR /home/worker
COPY . /home/worker
RUN chown -R 1001:1001 .
USER worker
ENTRYPOINT ["/bin/bash"]
apiVersion: "batch/v1"
kind: CronJob
metadata:
name: cron-job-test
namespace: me
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox:1.28
imagePullPolicy: Always
command:
- /bin/sh
- -c
args:
- git log;
restartPolicy: OnFailure
You should use an image that has the git binary installed in order to run git commands. In the manifest you are using image: busybox:1.28 to run the Pod, which does not have git installed; hence the error.
Use an image that contains git and try again, for example the sketch below.
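For illustration, a minimal sketch of the CronJob pointing at an image that contains git. The image name my-registry/cron-git-image:latest is a placeholder for wherever you push the image built from your Dockerfile above, and the demo command is only there to show git being found:
apiVersion: "batch/v1"
kind: CronJob
metadata:
  name: cron-job-test
  namespace: me
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              # placeholder: an image built from the Dockerfile above, which has git installed
              image: my-registry/cron-git-image:latest
              imagePullPolicy: Always
              command:
                - /bin/sh
                - -c
              args:
                # demo command; replace with your clone/build steps
                - git version
          restartPolicy: OnFailure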

Trying to figure out how to get this executable containerised for docker

I am having a rough time trying to create a Docker image that exposes Cloudflare's Tunnel executable for Linux. Thus far I have gotten to this stage with my Docker image (the image comes from https://github.com/jakejarvis/docker-cloudflare-argo/blob/master/Dockerfile):
FROM ubuntu:18.04
LABEL maintainer="Jake Jarvis <jake@jarv.is>"
RUN apt-get update \
&& apt-get install -y --no-install-recommends wget ca-certificates \
&& rm -rf /var/lib/apt/lists/*
RUN wget -O cloudflared.tgz https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.tgz \
&& tar -xzvf cloudflared.tgz \
&& rm cloudflared.tgz \
&& chmod +x cloudflared
ENTRYPOINT ["./cloudflared"]
Following the official documentation for their Kubernetes setup, I added it to my deployment as a sidecar (here cloudflare-argo:5 is the image built from the Dockerfile above):
- name: cloudflare-argo
  image: my-registry/cloudflare-argo:5
  imagePullPolicy: Always
  command: ["cloudflared", "tunnel"]
  args:
    - --url=http://localhost:8080
    - --hostname=my-website
    - --origincert=/etc/cloudflared/cert.pem
    - --no-autoupdate
  volumeMounts:
    - mountPath: /etc/cloudflared
      name: cloudflare-argo-secret
      readOnly: true
  resources:
    requests:
      cpu: "50m"
    limits:
      cpu: "200m"
volumes:
  - name: cloudflare-argo-secret
    secret:
      secretName: my-website.com
However, once I deploy, I get a CrashLoopBackOff error on my pod, with the following kubectl describe output:
Created container cloudflare-argo
Error: failed to start container "cloudflare-argo": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"cloudflared\": executable file not found in $PATH": unknown
In the dockerfile it is ./cloudflared, so that would be:
command:
  - ./cloudflared
  - tunnel
  - --url=http://localhost:8080
  - --hostname=my-website
  - --origincert=/etc/cloudflared/cert.pem
  - --no-autoupdate
(Also, there is no reason to use both command and args; just pick one. If you drop the first item, then use args.)
Alternatively, in your Dockerfile move the cloudflared binary to the /usr/local/bin folder instead of running it from the current WORKDIR:
&& chmod +x cloudflared \
&& mv cloudflared /usr/local/bin
ENTRYPOINT ["cloudflared"]

Is there a way to update Jenkins running in Kubernetes?

I'm trying to run Jenkins in Kubernetes, but the version of Jenkins is outdated. It says I need at least version 2.138.4 for the Kubernetes plugin.
I'm using the jenkins image from Docker Hub ("jenkins/jenkins:lts"). But when I try to run this in Kubernetes, it says the version is 2.60.3. I previously used a really old version of Jenkins (2.60.3), but I updated my Dockerfile to use the latest image. After that I built the image again and deployed it to Kubernetes. I even deleted my Kubernetes Deployment and Service before deploying them again.
I'm currently working in a development environment using Minikube.
Dockerfile:
FROM jenkins/jenkins:lts
ENV JENKINS_USER admin
ENV JENKINS_PASS admin
# Skip initial setup
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt
USER root
RUN apt-get update \
&& apt-get install -qqy apt-transport-https ca-certificates curl gnupg2 software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
RUN apt-get update -qq \
&& apt-get install docker-ce -y
RUN usermod -aG docker jenkins
RUN apt-get clean
RUN curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && chmod +x /usr/local/bin/docker-compose
USER jenkins
The Kubernetes deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: mikemanders/my-jenkins-image:1.0
          env:
            - name: JAVA_OPTS
              value: -Djenkins.install.runSetupWizard=false
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-home
          emptyDir: {}
And the Kubernetes Service:
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: NodePort
  selector:
    app: jenkins
  ports:
    - port: 8080
      targetPort: 8080
I think my Kubernetes configuration is good, so I'm guessing it has something to do with Docker.
What am I missing or doing wrong here?
TL;DR
To update the Deployment, you need a new Docker image based on the new Jenkins release:
docker build -t mikemanders/my-jenkins-image:1.1 .
docker push mikemanders/my-jenkins-image:1.1
kubectl set image deployment/jenkins jenkins=mikemanders/my-jenkins-image:1.1 --record
Kubernetes deploys images, not Dockerfiles
As per the Images documentation:
You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.
The image property of a container supports the same syntax as the docker command does, including private registries and tags.
So, you need an image to deploy.
Update your image
To update your image in the registry, use docker build -t and docker push:
docker build -t mikemanders/my-jenkins-image:1.1 .
docker push mikemanders/my-jenkins-image:1.1
This rebuilds the image on top of the updated jenkins/jenkins:lts base and uploads it to the container registry.
The catch is that you are updating the image version (e.g. 1.0->1.1) before updating the cluster.
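To confirm that the cluster actually picked up the new image after the rollout, standard kubectl commands can be used, for example (the Deployment and the container are both named jenkins in the manifest above):
kubectl rollout status deployment/jenkins
kubectl get deployment jenkins -o jsonpath='{.spec.template.spec.containers[0].image}'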

Docker-in-Docker in AKS

We have been tasked with setting up a container-based Jenkins deployment, and there is strong pressure to do this in AKS. Our Jenkins needs to be able to build other containers. Normally I'd handle this with a docker-in-docker approach by mounting /var/run/docker.sock & /usr/bin/docker into my running container.
I do not know if this is possible in AKS or not. Some forum posts on GitHub suggest that host-mounting is possible but broken in the latest AKS release. My limited experimentation with a Helm chart was met with this error:
Error: release jenkins4 failed: Deployment.apps "jenkins" is invalid:
[spec.template.spec.initContainers[0].volumeMounts[0].name: Required
value, spec.template.spec.initContainers[0].volumeMounts[0].name: Not
found: ""]
The change I made was to update the volumeMounts: section of jenkins-master-deployment.yaml and include the following:
- type: HostPath
  hostPath: /var/run/docker.sock
  mountPath: /var/run/docker.sock
Is what I'm trying to do even possible based on AKS security settings, or did I just mess up my chart?
If it's not possible to mount the docker socket into a container in AKS, that's fine, I just need a definitive answer.
Thanks,
Well, we did this a while back for VSTS (cloud TFS, now called Azure DevOps) build agents, so it should be possible. The way we did it was also by mounting docker.sock.
The relevant part for us was:
... container spec ...
    volumeMounts:
      - mountPath: /var/run/docker.sock
        name: docker-volume
volumes:
  - name: docker-volume
    hostPath:
      path: /var/run/docker.sock
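For context, a minimal complete Pod around that fragment might look like the sketch below. The Pod name, container name and image are placeholders, and the image still needs the docker CLI installed for the mounted socket to be useful:
apiVersion: v1
kind: Pod
metadata:
  name: docker-sock-demo        # placeholder name
spec:
  containers:
    - name: build-agent             # placeholder
      image: my-registry/agent:1    # placeholder; must contain the docker CLI
      volumeMounts:
        # mount the host's Docker socket into the container
        - mountPath: /var/run/docker.sock
          name: docker-volume
  volumes:
    - name: docker-volume
      hostPath:
        path: /var/run/docker.sock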
I have achieved the requirement using the following manifests.
Our k8s manifest file carries this securityContext under the pod definition:
securityContext:
  privileged: true
In our Dockerfile, we install Docker-inside-Docker like this:
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install curl wget -y
RUN apt-get install \
ca-certificates \
curl \
gnupg \
lsb-release -y
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
# last two lines of Dockerfile
COPY ./agent_startup.sh .
RUN chmod +x /agent_startup.sh
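# Note: only the last CMD in a Dockerfile takes effect, so the CMD on the next
# line is overridden and the container actually starts with ./agent_startup.sh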
CMD ["/usr/sbin/init"]
CMD ["./agent_startup.sh"]
Content of the agent_startup.sh file:
#!/bin/bash
echo "DOCKER STARTS HERE"
service --status-all
service docker start
service docker start
docker version
docker ps
echo "DOCKER ENDS HERE"
sleep 100000
Sample k8s file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: build-agent
  labels:
    app: build-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: build-agent
  template:
    metadata:
      labels:
        app: build-agent
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: build-agent
          image: myecr-repo.azurecr.io/buildagent
          securityContext:
            privileged: true
When the Dockerized agent pool was up, the Docker daemon was running inside the Docker container.
My kubectl version:
PS D:\Temp\temp> kubectl.exe version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.22.6
WARNING: version difference between client (1.25) and server (1.22) exceeds the supported minor version skew of +/-1
pod shell output:
root@**********-bcd967987-52wrv:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Disclaimer: Our Kubernetes cluster version is 1.22 and the base image is Ubuntu 18.04. This was tested only to check that Docker-inside-Docker runs; it was not registered with Azure DevOps. You can modify the startup script according to your needs.

Concurrent access to docker.sock on k8s

I would like to ask for your help/advice with the following issue. We are using Bamboo as our CI and we have remote Bamboo agents running on k8s.
In our build we have a step that creates a Docker image once the tests have passed. We expose Docker to the remote Bamboo agents via docker.sock. When we had only one remote Bamboo agent (to test how it works), everything worked correctly, but recently we have increased the number of remote agents. Now it happens quite often that a build gets stuck in the Docker image build step and will not move; we have to stop the build and run it again. Usually there is no useful info in the logs, but once in a while this appears:
24-May-2017 16:04:54 Execution failed for task ':...'.
24-May-2017 16:04:54 > Docker execution failed
24-May-2017 16:04:54 Command line [docker build -t ...] returned:
24-May-2017 16:04:54 time="2017-05-24T16:04:54+02:00" level=info msg="device or resource busy"
This is how our k8s Deployment looks:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: bamboo-agent
  namespace: backend-ci
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: bamboo-agent
    spec:
      containers:
        - name: bamboo-agent
          stdin: true
          resources:
            .
          env:
            .
            .
            .
          ports:
            - .
          volumeMounts:
            - name: dockersocket
              mountPath: /var/run/docker.sock
      volumes:
        - hostPath:
            path: /var/run/docker.sock
          name: dockersocket
And here is the Dockerfile for the remote Bamboo agent:
FROM java:8
ENV CI true
RUN apt-get update && apt-get install -yq curl && apt-get -yqq install docker.io && apt-get install tzdata -yq
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && mv kubectl /usr/local/bin
RUN echo $TZ | tee /etc/timezone
RUN dpkg-reconfigure --frontend noninteractive tzdata
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64
RUN chmod +x /usr/local/bin/dumb-init
ADD run.sh /root
ADD .dockercfg /root
ADD config /root/.kube/
ADD config.json /root/.docker/
ADD gradle.properties /root/.gradle/
ADD bamboo-capabilities.properties /root
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD /root/run.sh
Is there a way to solve this issue? And is exposing docker.sock a good solution, or is there a better approach?
I have read a few articles about Docker-in-Docker, but I do not like --privileged mode.
If you need any other information, I will try to provide it.
Thank you.
One of the things you could do is run your builds on rkt while keeping Kubernetes itself on Docker.
