I have tried many methods to build my Rails app into a Docker image and deploy it to Google Container Engine, but so far none has succeeded.
My Dockerfile (under the Rails root path)
FROM ruby:2.2.2
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get install -y nodejs
ENV APP_HOME /myapp
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
ADD Gemfile $APP_HOME/Gemfile
ADD Gemfile.lock $APP_HOME/Gemfile.lock
ADD vendor/gems/my_gem $APP_HOME/vendor/gems/my_gem
ADD init.sh $APP_HOME/
RUN export LANG=C.UTF-8 && bundle install
ADD . $APP_HOME
CMD ["sh", "init.sh"]
My init.sh
#!/bin/bash
bundle exec rake db:create db:migrate
bundle exec rails server -b 0.0.0.0
My kubernetes config file
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - name: web
        image: gcr.io/my-project-id/myapp:v1
        ports:
        - containerPort: 3000
          name: http-server
        env:
        - name: RAILS_ENV
          value: "production"
After I create the web controller on GKE with kubectl:
kubectl create -f web-controller.yml
and see the pod logs:
kubectl logs web-controller-xxxxx
it shows:
init.sh: 2: init.sh: bundle: not found
init.sh: 3: init.sh: bundle: not found
It seems the path is not found. How can I fix this?
Maybe you should execute your init.sh directly instead of sh init.sh? It would appear that the $PATH and maybe other ENV variables are not getting set for that sh init.sh shell. If you can exec into the container and which bundle shows the path to bundle, then you're losing your login ENVs when executing with sh init.sh.
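For illustration, a minimal change to the Dockerfile above along those lines could be (just a sketch; either option avoids the plain sh invocation):
# Option 1: run the script through bash so the #!/bin/bash environment applies
# instead of sh (which is dash on Debian-based images):
CMD ["bash", "init.sh"]

# Option 2: make the script executable and run it directly, letting the shebang
# choose the interpreter (place the chmod after the final `ADD . $APP_HOME` so
# the permission bit is not overwritten by the copy):
# RUN chmod +x init.sh
# CMD ["./init.sh"]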
If it helps at all, I've written a how-to on deploying Rails on GKE with Kubernetes. One thing you may want to change is that if you have several of your web pods running, they will all run the init.sh script and they will all attempt to db:migrate. There will be a race condition for which one migrates and in what order (if you have many). You probably only want to run db:migrate from one container during a deploy. You can use a Kubernetes Job to accomplish that or kubectl run migrator --image=us.gcr.io/your/image --rm --restart=Never or the like to execute the db:migrate task just once before rolling out your new web pods.
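As an illustration, a one-off migration Job could look roughly like the sketch below (the Job name is hypothetical; the image and env values are taken from the manifest in the question):
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate        # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: migrate
        image: gcr.io/my-project-id/myapp:v1
        command: ["bundle", "exec", "rake", "db:migrate"]
        env:
        - name: RAILS_ENV
          value: "production"
      restartPolicy: Never
You would create this before rolling out the new web pods and delete it once it has completed.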
You can use kubectl exec to enter your container and print the environment.
http://kubernetes.io/v1.1/docs/user-guide/getting-into-containers.html
For example:
kubectl exec web-controller-xxxxx -- sh -c 'printenv'
You could also use kubectl interactively to confirm that bundle is in your container image:
kubectl exec -ti web-controller-xxxxx -- sh
If bundle is in your image, then either add its directory to PATH in init.sh, or specify its path explicitly in each command.
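For example, a revised init.sh might look like this (the /usr/local/bundle/bin directory is an assumption; substitute whatever directory `which bundle` reports inside your container):
#!/bin/bash
# Assumption: `which bundle` inside the container prints /usr/local/bundle/bin/bundle;
# adjust the exported directory to match what you actually see.
export PATH="/usr/local/bundle/bin:$PATH"
bundle exec rake db:create db:migrate
bundle exec rails server -b 0.0.0.0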
Related
I'm quite new to kubernetes and docker.
I am trying to create a Kubernetes CronJob which will, every x minutes, clone a repo, build the Dockerfile in that repo, and then apply the manifest file to create a job.
Although I install git in the CronJob's Dockerfile, any git command I run from the Kubernetes manifest file is not recognised. How should I go about fixing this, please?
FROM python:3.8.10
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y git
RUN useradd -rm -d /home/worker -s /bin/bash -g root -G sudo -u 1001 worker
WORKDIR /home/worker
COPY . /home/worker
RUN chown -R 1001:1001 .
USER worker
ENTRYPOINT ["/bin/bash"]
apiVersion: "batch/v1"
kind: CronJob
metadata:
name: cron-job-test
namespace: me
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox:1.28
imagePullPolicy: Always
command:
- /bin/sh
- -c
args:
- git log;
restartPolicy: OnFailure
You should use an image that has the git binary installed in order to run git commands. In the manifest you are using image: busybox:1.28 to run the Pod, which does not have git installed; hence you are getting the error.
Use an image that includes git and try again.
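For example, a sketch of the same CronJob pointing at an image that ships git (alpine/git is used here purely as an illustration; your own image built from the Dockerfile above would work the same way, since the command: field overrides the image's entrypoint):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cron-job-test
  namespace: me
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: alpine/git:latest   # illustration: any image with git on PATH
            imagePullPolicy: Always
            command:
            - /bin/sh
            - -c
            args:
            - git --version;           # replace with your clone/build steps
          restartPolicy: OnFailure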
In an Openshift environment (Kubernetes v1.18.3+47c0e71)
I am trying to run a very basic container which will contain:
Alpine (latest version)
JDK 1.8
Jmeter 5.3
I just want it to boot and run in a container, accepting connections so that I can run the JMeter CLI from a command-line terminal.
I have gotten this to work perfectly in my local Docker installation. This is the Dockerfile content:
FROM alpine:latest
ARG JMETER_VERSION="5.3"
ENV JMETER_HOME /opt/apache-jmeter-${JMETER_VERSION}
ENV JMETER_BIN ${JMETER_HOME}/bin
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
USER root
ARG TZ="Europe/Amsterdam"
RUN apk update \
&& apk upgrade \
&& apk add ca-certificates \
&& update-ca-certificates \
&& apk add --update openjdk8-jre tzdata curl unzip bash \
&& apk add --no-cache nss \
&& rm -rf /var/cache/apk/ \
&& mkdir -p /tmp/dependencies \
&& curl -L --silent ${JMETER_DOWNLOAD_URL} > /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz \
&& mkdir -p /opt \
&& tar -xzf /tmp/dependencies/apache-jmeter-${JMETER_VERSION}.tgz -C /opt \
&& rm -rf /tmp/dependencies
# Set global PATH such that "jmeter" command is found
ENV PATH $PATH:$JMETER_BIN
WORKDIR ${JMETER_HOME}
For some reason, when I configure a Pod with a container using that exact image, previously uploaded to a private Docker image registry, it does not work.
This is the Deployment configuration (YAML) file (very basic as well):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter
  namespace: myNamespace
  labels:
    app: jmeter
    group: myGroup
spec:
  selector:
    matchLabels:
      app: jmeter
  replicas: 1
  template:
    metadata:
      labels:
        app: jmeter
    spec:
      containers:
      - name: jmeter
        image: myprivateregistry.azurecr.io/jmeter:dev
        resources:
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 500Mi
        imagePullPolicy: Always
      restartPolicy: Always
      imagePullSecrets:
      - name: myregistrysecret
Unfortunately, I am not getting any logs:
A screenshot of the Pod events:
Unfortunately, I am also unable to access the container's terminal:
Any idea on:
how to get further logs?
what is going on?
On your local machine, you are likely using docker run -it <my_container_image> or similar. Using the -it option will run an interactive shell in your container without you specifying a CMD and will keep that shell running as the primary process started in your container. So by using this command, you are basically already specifying a command.
Kubernetes expects that the container image contains a process that is run on start (CMD) and that will run as long as the container is alive (for example a webserver).
In your case, Kubernetes is starting the container, but you are not specifying what should happen when the container image is started. This leads to the container immediately terminating, which is what you can see in the Events above. Because you are using a Deployment, the failing Pod is then restarted again and again.
A possible workaround is to run the sleep command in your container on startup by specifying a command in your Pod, like so:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: alpine
    command: ["/bin/sleep", "infinite"]
  restartPolicy: OnFailure
(Kubernetes documentation)
This will start the Pod and immediately run the /bin/sleep infinite command, so the primary process is this sleep process, which never terminates. Your container will now run indefinitely. You can then use oc rsh <name_of_the_pod> to connect to the container and run anything you would like interactively (for example jmeter).
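Applied to the Deployment from the question, the relevant fragment would look roughly like this (only the command: line is new; everything else stays as posted):
    spec:
      containers:
      - name: jmeter
        image: myprivateregistry.azurecr.io/jmeter:dev
        command: ["/bin/sleep", "infinite"]   # same keep-alive trick as in the Pod example
        imagePullPolicy: Always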
I have the following dockerfile:
FROM node:8 as build
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json
RUN npm install
COPY . /usr/src/app
We publish this image to our Artifactory. However, as there is no command/entrypoint provided, the container simply exits immediately, so I usually use docker run -d -t to run it. However, when deploying it in Kubernetes, I cannot specify the arguments -d and -t, since I get an error that node does not recognise the arguments -d and -t.
When adding the following entrypoint,
ENTRYPOINT [ "tail", "-f", "/dev/null"]
The machine keeps crashing
How can I keep the pod running in background?
Make use of the -i and --tty options of the kubectl run command.
kubectl run -i --tty --image=<image> <name> --port=80 --env="DOMAIN=cluster"
More info here.
Update:
When using YAML files, make use of the stdin and tty options.
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - name: testpod
    image: testimage
    stdin: true
    tty: true
More info here.
I got the same case. Besides
stdin: true
tty: true
I also need to add:
command:
- /bin/bash
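Put together, a minimal sketch of such a Pod (the image name is a placeholder, as above) could be:
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - name: testpod
    image: testimage        # placeholder
    command:
    - /bin/bash             # keep an interactive shell as the main process
    stdin: true
    tty: true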
I have this dockerfile:
FROM centos:latest
COPY mongodb.repo /etc/yum.repos.d/mongodb.repo
RUN yum update -y && \
yum install -y mongodb-org mongodb-org-shell iproute nano
RUN mkdir -p /data/db && chown -R mongod:mongod /data
I can build and run this locally just fine with Docker:
docker build -t my-image .
docker run -t -d -p 27017:27017 --name my-container my-image
docker cp mongod.conf my-container:/etc/mongod.conf
docker exec -u mongod my-container "mongod --config /etc/mongod.conf"
Now I would like to run the container in OpenShift. I have managed to build and push the image to a namespace, and have created the below DeploymentConfig that runs the container, just like I can do locally.
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: my-app
    namespace: my-namespace
  spec:
    replicas: 1
    selector:
      app: my-app
    template:
      metadata:
        labels:
          app: my-app
          deploymentconfig: my-app
      spec:
        containers:
        - name: my-app
          image: ${IMAGE}
          command: ["mongod"]
          ports:
          - containerPort: 27017
          imagePullPolicy: Always
When I click deploy the image is pulled successfully but I get the error:
exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
Why does it not have read/write access to the /data/db folder? As can be seen from the Dockerfile above, that folder is created and the mongod user owns it.
Is it also necessary to grant the mongod user read/write access to that folder in the DeploymentConfig somehow?
OpenShift runs containers as an unknown, arbitrary, non-root user (put simply, imagine mongodb running as user 1000000001, with no guarantee that this is the number that will be chosen). This may mean the mongod user is not the user the process actually runs as, which causes these issues, so check the documentation for guidelines on supporting Arbitrary User IDs.
For an image to support running as an arbitrary user, directories and files that may be written to by processes in the image should be owned by the root group and be read/writable by that group. Files to be executed should also have group execute permissions.
Adding the following to your Dockerfile sets the directory and file permissions to allow users in the root group to access them in the built image:
# Set the group to 0 (the root group) and give the group the same permissions as the user:
RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory
Your mongod.conf can be mounted as a ConfigMap, an OpenShift object often used to manage configuration, which can be mounted into the Pod as a (read-only) volume.
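A rough sketch of that approach (the ConfigMap name and mount path here are illustrative, not required values):
# Create the ConfigMap once from the local file:
#   oc create configmap mongod-conf --from-file=mongod.conf
# Then mount it in the DeploymentConfig's pod spec and point mongod at it:
        containers:
        - name: my-app
          image: ${IMAGE}
          command: ["mongod", "--config", "/etc/mongod/mongod.conf"]
          volumeMounts:
          - name: mongod-conf
            mountPath: /etc/mongod
        volumes:
        - name: mongod-conf
          configMap:
            name: mongod-conf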
Edit - Command explanation
chgrp vs chown
chown is also perfectly fine to use here, but as we are only interested in updating the group, chgrp provides a little extra safety, making sure that is the only thing changed (in case of a mistyped command, etc.)
chmod -R g=u
chmod is used to change the file permissions;
-R tells it to recurse through the path (in case there are subdirectories they will also receive the same permissions);
g=u means "group=user" or "give the group the same permissions as those that are already there for the user"
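Adapted to the Dockerfile from the question, that would look roughly like this (a sketch that keeps the existing mongod ownership and additionally opens /data to the root group):
FROM centos:latest
COPY mongodb.repo /etc/yum.repos.d/mongodb.repo
RUN yum update -y && \
    yum install -y mongodb-org mongodb-org-shell iproute nano
# Own the data directory as mongod, then give the root group (gid 0) the same
# access, so an arbitrary non-root UID assigned by OpenShift can still write here.
RUN mkdir -p /data/db && chown -R mongod:mongod /data && \
    chgrp -R 0 /data && \
    chmod -R g=u /data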
I would like to ask for your help/advice with the following issue. We are using Bamboo as our CI and we have remote Bamboo agents running on k8s.
In our build we have a step that creates a Docker image when the tests pass. To the remote Bamboo agents we are exposing Docker via docker.socket. When we had only one remote Bamboo agent (to test how it works), everything worked correctly, but recently we have increased the number of remote agents. Now it happens quite often that a build gets stuck in the Docker image build step and does not move on. We have to stop the build and run it again. Usually there is no useful info in the logs, but once in a while this appears:
24-May-2017 16:04:54 Execution failed for task ':...'.
24-May-2017 16:04:54 > Docker execution failed
24-May-2017 16:04:54 Command line [docker build -t ...] returned:
24-May-2017 16:04:54 time="2017-05-24T16:04:54+02:00" level=info msg="device or resource busy"
This is how our k8s Deployment looks:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: bamboo-agent
  namespace: backend-ci
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: bamboo-agent
    spec:
      containers:
      - name: bamboo-agent
        stdin: true
        resources:
          .
        env:
          .
          .
          .
        ports:
        - .
        volumeMounts:
        - name: dockersocket
          mountPath: /var/run/docker.sock
      volumes:
      - hostPath:
          path: /var/run/docker.sock
        name: dockersocket
And here is the Dockerfile for the remote Bamboo agent.
FROM java:8
ENV CI true
RUN apt-get update && apt-get install -yq curl && apt-get -yqq install docker.io && apt-get install tzdata -yq
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && mv kubectl /usr/local/bin
RUN echo $TZ | tee /etc/timezone
RUN dpkg-reconfigure --frontend noninteractive tzdata
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64
RUN chmod +x /usr/local/bin/dumb-init
ADD run.sh /root
ADD .dockercfg /root
ADD config /root/.kube/
ADD config.json /root/.docker/
ADD gradle.properties /root/.gradle/
ADD bamboo-capabilities.properties /root
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD /root/run.sh
Is there some way to solve this issue? And is exposing docker.socket a good solution, or is there a better approach?
I have read a few articles about Docker-in-Docker, but I do not like --privileged mode.
If you need any other information, I will try to provide it.
Thank you.
One of the things you could do is run your builds on rkt while running Kubernetes on Docker.