How to keep a Docker pod without an entrypoint running in K8s?

I have the following dockerfile:
FROM node:8 as build
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json
RUN npm install
COPY . /usr/src/app
We build this image and publish it to our Artifactory. However, as there is no command/entrypoint provided, the container simply exits immediately, so I usually use docker run -d -t to run it. When deploying it in Kubernetes, however, I cannot specify the args -d and -t, and I get an error that node does not recognize the arguments -d and -t.
When adding the following entrypoint,
ENTRYPOINT [ "tail", "-f", "/dev/null"]
The container keeps crashing.
How can I keep the pod running in the background?

Make use of the -i and --tty options of the kubectl run command.
kubectl run -i --tty --image=<image> <name> --port=80 --env="DOMAIN=cluster"
More info here.
Update:
In the case of YAML files, make use of the stdin and tty options.
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - name: testpod
    image: testimage
    stdin: true
    tty: true
More info here.

I had the same case. Besides
stdin: true
tty: true
I also needed to add:
command:
- /bin/bash
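Putting the two answers above together, a minimal sketch of such a pod spec (reusing the placeholder names testpod/testimage from the first answer) might look like this:
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - name: testpod
    image: testimage
    # keep stdin open and allocate a TTY so the shell does not exit
    stdin: true
    tty: true
    # override the image's missing entrypoint with an interactive shell
    command:
    - /bin/bash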

Related

Package installed in Dockerfile inaccessible in manifest file

I'm quite new to kubernetes and docker.
I am trying to create a Kubernetes CronJob which will, every x minutes, clone a repo, build the Dockerfile in that repo, and then apply the manifest file to create the job.
Although I install git in the CronJob Dockerfile, any git command I run from the Kubernetes manifest file is not recognised. How should I go about fixing this, please?
FROM python:3.8.10
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y git
RUN useradd -rm -d /home/worker -s /bin/bash -g root -G sudo -u 1001 worker
WORKDIR /home/worker
COPY . /home/worker
RUN chown -R 1001:1001 .
USER worker
ENTRYPOINT ["/bin/bash"]
apiVersion: "batch/v1"
kind: CronJob
metadata:
name: cron-job-test
namespace: me
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox:1.28
imagePullPolicy: Always
command:
- /bin/sh
- -c
args:
- git log;
restartPolicy: OnFailure
You should use an image that has the git binary installed in order to run git commands. In the manifest you are using image: busybox:1.28 to run the pod, which doesn't have git installed; hence you are getting the error.
Use the correct image name and try again.
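As a hedged illustration, assuming the Dockerfile above is built and pushed as me/git-runner:latest (a hypothetical image name and tag), the CronJob could point at that image instead of busybox:
apiVersion: "batch/v1"
kind: CronJob
metadata:
  name: cron-job-test
  namespace: me
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            # hypothetical tag for the image built from the Dockerfile above,
            # which does have git installed
            image: me/git-runner:latest
            imagePullPolicy: Always
            command:
            - /bin/sh
            - -c
            args:
            - git log;
          restartPolicy: OnFailure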

Adding Volume to docker container's /app Issue

I am having trouble creating a volume that maps to the "/app" directory in my container.
This is basically so that when I update the code I don't need to rebuild the container.
This is my Dockerfile:
# stage 1
FROM node:latest as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
# stage 2
FROM nginx:alpine
COPY --from=node /app/dist/my-first-app /usr/share/nginx/html
I use this command to run the container
docker run -d -p 100:80/tcp -v ${PWD}:/app:/app docker-testing:v1
and no volume gets linked to it.
However, if I were to do this
docker run -d -p 100:80/tcp -v ${PWD} docker-testing:v1
I do get a volume at least
Anything obvious that I am doing wrong?
Thanks
The ${PWD}:/app:/app should be ${PWD}/app:/app.
If you explode ${PWD}, you'd obtain something like /home/user/src/thingy:/app:/app which does not make much sense.
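Applied to the command from the question, the corrected invocation would be:
docker run -d -p 100:80/tcp -v ${PWD}/app:/app docker-testing:v1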
EDIT:
I'd suggest using docker-compose to avoid this kind of issue (it also greatly simplifies the commands needed to start containers).
In your case the docker-compose.yml would look like this:
version: "3"
services:
doctesting:
build: .
image: docker-testing:v1
volumes:
- "./app:/app"
ports:
- "100:80"
I didn't really test if it works, there might be typos...
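Assuming a compose file along those lines, it would then typically be started and stopped with:
# build the image (if needed) and start the service in the background
docker-compose up -d --build
# stop and remove the containers when done
docker-compose down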

Docker images using CircleCI

I am working on integrating a CI/CD pipeline using Docker. How can I use a docker-compose file to build and create the container?
I have tried putting in a Dockerfile and a docker-compose.yml, but neither of them works.
Below is the docker-compose file:
FROM ruby:2.2
EXPOSE 8000
RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN bundle install
CMD ["ruby","app.rb"]
RUN docker build -t itsruby:itsruby .
RUN docker run -d itsruby:itsruby
Below is docker-compose.yml
version: 2
jobs:
  build:
    docker:
      - image: circleci/ruby:2.2
    steps:
      - checkout
      - run: CMD ["ruby","app.rb"]
      - run: |
          docker build -t itsruby:itsruby .
          docker run -d itsruby:itsruby
  test:
    docker:
      - image: circleci/ruby:2.2
    steps:
      - checkout
      - run: CMD ["ruby","app.rb"]
The build keeps failing in CircleCI.
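For what it's worth, a hedged sketch of a CircleCI 2.x config that actually builds the image might look like the following. The setup_remote_docker step (a standard CircleCI step) is needed before docker commands can run inside a docker executor, and the RUN docker build / RUN docker run lines would come out of the Dockerfile itself. The itsruby:itsruby tag is taken from the question; the rest is an assumption, not a verified fix:
version: 2
jobs:
  build:
    docker:
      - image: circleci/ruby:2.2
    steps:
      - checkout
      # provision a remote Docker engine for this job
      - setup_remote_docker
      - run:
          name: Build and run the image
          command: |
            docker build -t itsruby:itsruby .
            docker run -d itsruby:itsruby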

Concurrent access to docker.sock on k8s

I would like to ask for help/advice with the following issue. We are using Bamboo as our CI, and we have remote Bamboo agents running on k8s.
Our build has a step that creates a Docker image once the tests have passed. To the remote Bamboo agents we are exposing Docker via docker.sock. When we had only one remote Bamboo agent (to test how it works) everything worked correctly, but recently we have increased the number of remote agents. Now it happens quite often that a build gets stuck in the Docker image build step and does not move on. We have to stop the build and run it again. Usually there is no useful info in the logs, but once in a while this appears:
24-May-2017 16:04:54 Execution failed for task ':...'.
24-May-2017 16:04:54 > Docker execution failed
24-May-2017 16:04:54 Command line [docker build -t ...] returned:
24-May-2017 16:04:54 time="2017-05-24T16:04:54+02:00" level=info msg="device or resource busy"
This is how our k8s deployment looks:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: bamboo-agent
  namespace: backend-ci
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: bamboo-agent
    spec:
      containers:
      - name: bamboo-agent
        stdin: true
        resources:
          .
        env:
          .
          .
          .
        ports:
        - .
        volumeMounts:
        - name: dockersocket
          mountPath: /var/run/docker.sock
      volumes:
      - hostPath:
          path: /var/run/docker.sock
        name: dockersocket
And here is the Dockerfile for the remote Bamboo agent.
FROM java:8
ENV CI true
RUN apt-get update && apt-get install -yq curl && apt-get -yqq install docker.io && apt-get install tzdata -yq
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && mv kubectl /usr/local/bin
RUN echo $TZ | tee /etc/timezone
RUN dpkg-reconfigure --frontend noninteractive tzdata
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64
RUN chmod +x /usr/local/bin/dumb-init
ADD run.sh /root
ADD .dockercfg /root
ADD config /root/.kube/
ADD config.json /root/.docker/
ADD gradle.properties /root/.gradle/
ADD bamboo-capabilities.properties /root
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD /root/run.sh
Is there some way to solve this issue? And is exposing docker.sock a good solution, or is there a better approach?
I have read a few articles about Docker-in-Docker, but I do not like --privileged mode.
If you need any other information I will try to provide it.
Thank you.
One of the things you can do is run your builds on rkt while running Kubernetes itself on Docker.

How to deploy a Rails app to Google Container Engine with Kubernetes?

I have tried many methods to build my Rails app into a Docker image and deploy it to Google Container Engine, but so far none of them has succeeded.
My Dockerfile (under the Rails root path):
FROM ruby:2.2.2
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get install -y nodejs
ENV APP_HOME /myapp
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
ADD Gemfile $APP_HOME/Gemfile
ADD Gemfile.lock $APP_HOME/Gemfile.lock
ADD vendor/gems/my_gem $APP_HOME/vendor/gems/my_gem
ADD init.sh $APP_HOME/
RUN export LANG=C.UTF-8 && bundle install
ADD . $APP_HOME
CMD ["sh", "init.sh"]
My init.sh
#!/bin/bash
bundle exec rake db:create db:migrate
bundle exec rails server -b 0.0.0.0
My kubernetes config file
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - name: web
        image: gcr.io/my-project-id/myapp:v1
        ports:
        - containerPort: 3000
          name: http-server
        env:
        - name: RAILS_ENV
          value: "production"
After I create the web controller on GKE with kubectl:
kubectl create -f web-controller.yml
and see the pod logs:
kubectl logs web-controller-xxxxx
it shows:
init.sh: 2: init.sh: bundle: not found
init.sh: 3: init.sh: bundle: not found
It seems the path is not found. How should I fix this?
Maybe you should execute your init.sh directly instead of sh init.sh? It would appear that the $PATH and maybe other ENV variables are not getting set for that sh init.sh shell. If you can exec into the container and which bundle shows the path to bundle, then you're losing your login ENVs when executing with sh init.sh.
If it helps at all, I've written a how-to on deploying Rails on GKE with Kubernetes. One thing you may want to change is that if you have several of your web pods running, they will all run the init.sh script and they will all attempt to db:migrate. There will be a race condition for which one migrates and in what order (if you have many). You probably only want to run db:migrate from one container during a deploy. You can use a Kubernetes Job to accomplish that or kubectl run migrator --image=us.gcr.io/your/image --rm --restart=Never or the like to execute the db:migrate task just once before rolling out your new web pods.
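As a hedged sketch of that Job approach, reusing the image and RAILS_ENV from the question's ReplicationController (the Job name and the rest of the spec are assumptions):
apiVersion: batch/v1
kind: Job
metadata:
  name: myapp-migrate
spec:
  template:
    spec:
      containers:
      - name: migrate
        # same image the web pods use (from the question's config)
        image: gcr.io/my-project-id/myapp:v1
        # run only the migration, not the whole init.sh
        command: ["bundle", "exec", "rake", "db:migrate"]
        env:
        - name: RAILS_ENV
          value: "production"
      restartPolicy: Never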
You can use kubectl exec to enter your container and print the environment.
http://kubernetes.io/v1.1/docs/user-guide/getting-into-containers.html
For example:
kubectl exec web-controller-xxxxx sh -c printenv
You could also use kubectl interactively to confirm that bundle is in your container image:
kubectl exec -ti web-controller-xxxxx sh
If bundle is in your image, then either add its directory to PATH in init.sh, or specify its path explicitly in each command.
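For example, a hedged variant of init.sh with the PATH set explicitly; /usr/local/bundle/bin is only an assumption about where the official ruby image puts gem executables, so confirm the real location with which bundle inside the container first:
#!/bin/bash
# Assumed gem binstub location in the official ruby image; verify with `which bundle`.
export PATH="/usr/local/bundle/bin:/usr/local/bin:$PATH"
bundle exec rake db:create db:migrate
bundle exec rails server -b 0.0.0.0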
