Kubernetes/Helm - Obtaining Docker image version from deployment - docker

I'm using Kubernetes + Helm, and I want to ask whether it is possible to get the Docker image version as specified in the spec's containers. For example, I have the deployment below:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: myrepo.com/animage:0.0.3
        imagePullPolicy: Always
        command: ["/bin/sh","-c"]
        args: ["work"]
        envFrom:
        - configMapRef:
            name: test
I then have another deployment where I would like to get that Docker version number 0.0.3 and set it as an env var.
Any ideas appreciated.
Thanks.

Short answer: no, at least not directly. There are, however, two workarounds I can see that you might find viable.
First, apart from providing the image with your version tag, set a label/annotation on the pod indicating its version and use the Downward API to pass that data down to your container:
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
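A minimal sketch of that first approach, assuming you keep a version label in sync with the image tag (the label and env var names here are illustrative; referencing a single label via fieldRef needs a reasonably recent cluster, otherwise use the downwardAPI volume from the second link):
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
        version: "0.0.3"   # keep in sync with the image tag below
    spec:
      containers:
      - name: test
        image: myrepo.com/animage:0.0.3
        env:
        - name: IMAGE_VERSION
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['version']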
Second, if you actually own the process that builds that image, you can easily bake the version in during docker build with something like:
Dockerfile:
FROM ubuntu
ARG VERSION=undefined
ENV VERSION=$VERSION
and a build command like:
docker build --build-arg VERSION=0.0.3 -t myrepo.com/animage:0.0.3 .
This will give you an image with a baked-in env var holding your version value.
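You can then verify the baked-in value with, for example:
docker run --rm myrepo.com/animage:0.0.3 sh -c 'echo "$VERSION"'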

Related

DOCKER COPY not reflected in container

My Dockerfile does not copy a file from the GitLab project directory. Below is the Dockerfile:
FROM docker.elastic.co/logstash/logstash:7.16.3
USER root
RUN yum install -y curl dos2unix
COPY scripts/somefile.sh /src/app/
RUN dos2unix /src/app/somefile.sh
WORKDIR /src/app/
ENTRYPOINT ["/src/app/somefile.sh"]
The project tree looks like the following:
C:.
├───.idea
│   └───codeStyles
├───deployment
│   ├───dev
└───scripts
    ├───somefile.sh
Below is the deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demo-app
  name: demo-app
  namespace: --
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - image: docker-image-described
        name: demo-app
        resources: ...
        volumeMounts:
        - name: some-secret
          mountPath: /src/abc/secret
          readOnly: true
        securityContext:
          privileged: true
      restartPolicy: Always
      volumes:
      - name: very-secret
        secret:
          secretName: some-secret
Further, I am building the image using a gitlab-ci file and then creating a deployment that uses this image. Please help me understand what I am doing wrong: when I exec inside the running pod, I can't see the file in the defined destination location.
I don't get any errors, which I usually do if there is a problem such as a wrong source location. I also went through some similar questions, so I already checked that the EOL is Unix.
Also, I would like to add an observation: I cannot create the directory either, with RUN mkdir -p /src/app/scripts/. This also runs without any error, it is just not reflected inside the pod.
This is probably due to using an old (wrong) Docker image, so changes or updates are not reflected.
Kubernetes does not pull a new image version if the image is already present and the image tag is not "latest". Verify that the image is correctly built by pulling it and running it locally with docker. Use a different tag to force Kubernetes to pull the image.
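A rough sketch of that workflow, reusing the names from the question with a made-up registry and tag:
# build and push under a fresh, unique tag
docker build -t myregistry.example.com/demo-app:2022-03-01-1 .
docker push myregistry.example.com/demo-app:2022-03-01-1
# point the Deployment at the new tag so Kubernetes has to pull it
kubectl set image deployment/demo-app demo-app=myregistry.example.com/demo-app:2022-03-01-1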

Running python script after container is up (Kubernetes)

I'm using the following Docker image: https://github.com/budtmo/docker-android. It's a Docker image for Android emulators.
I run it using Kubernetes with the following deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: android-deployment
spec:
  selector:
    matchLabels:
      app: android-emulator
  replicas: 10
  template:
    metadata:
      labels:
        app: android-emulator
    spec:
      containers:
      - name: android-emulator
        image: budtmo/docker-android-x86-8.1
        ports:
        - containerPort: 6080
        - containerPort: 5554
        - containerPort: 5555
        env:
        - name: DEVICE
          value: "Samsung Galaxy S8"
After the container is running, it automatically starts the Android emulator (I don't know exactly how).
I need to run a Python script automatically after the container is up, for each running container.
How can I do it? What should I change in my deployment file?
You could simply create a Dockerfile to build your own image from the budtmo/docker-android-x86-8.1 base image and deploy this. Within the Dockerfile you define the start command or entrypoint.
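A rough sketch of such a Dockerfile; the script names are made up, and your wrapper would still need to start (or wait for) the emulator the way the base image does before running the script:
FROM budtmo/docker-android-x86-8.1
# your test script plus a wrapper that starts/waits for the emulator and then runs it
COPY run-tests.py /root/run-tests.py
COPY start-and-test.sh /root/start-and-test.sh
RUN chmod +x /root/start-and-test.sh
ENTRYPOINT ["/root/start-and-test.sh"]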
UPDATE
I think I understand, correct me if I am wrong: you want to run your Python script against the Android emulator running in Kubernetes.
As I said, I am not really firm with Kubernetes, but couldn't you run the Android emulator as an init container, and the Python script itself in the "main" container?
Like described here: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

Kubernetes: The code change does not appear, is there a way to sync?

In the Dockerfile I copy my code like this:
COPY src/ /var/www/html/
but somehow my code changes don't appear the way they used to with Docker alone. Unless I remove the Pods, the changes do not appear. How can I sync this?
I am using minikube.
webserver.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: php-apache
        image: learningk8s_website
        imagePullPolicy: Never
        ports:
        - containerPort: 80
When your container spec says:
image: learningk8s_website
imagePullPolicy: Never
The second time you kubectl apply it, Kubernetes determines that it's exactly the same as the Deployment spec you already have and does nothing. Even if it did generate new Pods, the server is highly likely to notice that it already has an image learningk8s_website:latest and won't pull a new one; indeed, you're explicitly telling Kubernetes not to.
The usual practice here is to include some unique identifier in the image name, such as a date stamp or commit hash.
IMAGE=$REGISTRY/name/learningk8s_website:$(git rev-parse --short HEAD)
docker build -t "$IMAGE" .
docker push "$IMAGE"
You then need to make the corresponding change in the Deployment spec and kubectl apply it. This will cause Kubernetes to notice that there is some change in the pod spec, create new pods with the new image, and destroy the old pods (in that order). You may find a templating engine like Helm to be useful to make it easier to inject this value into the YAML.
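Alternatively, instead of editing the YAML by hand, you can patch the image of the running Deployment directly (deployment and container names taken from the question's manifest):
kubectl set image deployment/webserver php-apache="$IMAGE"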

What is the workflow to build with docker AND docker-compose?

I have this repo, and docker-compose up will launch the project, create 2 containers (a DB and API), and everything works.
Now I want to build and deploy to Kubernetes. I try docker-compose build but it complains there's no Dockerfile. So I start writing a Dockerfile and then discover that docker/Dockerfiles don't support loading ENV vars from an env_file or .env file. What gives? How am I expected to build this image? Could somebody please enlighten me?
What is the intended workflow for building a docker image with the appropriate environment variables?
Those environment variables shouldn't be set at the docker build step, but when running the application with Kubernetes or docker-compose.
So:
Write a Dockerfile and place it in the root folder. Something like this:
FROM node
COPY package.json .
RUN npm install
COPY . .
ENTRYPOINT ["npm", "start"]
Modify docker-compose.yaml. In the image field you must specify the name for the image to be built. It should be something like this:
image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
There is no need to set user and working_dir
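The relevant part of docker-compose.yaml might then look roughly like this (the service name is just an example):
services:
  api:
    build: .
    image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
    env_file: .env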
Build the image with docker-compose build (you can also do this with docker build)
Now you can use docker-compose up to run your app locally, with the .env file
To deploy it on Kubernetes you need to publish your image to Docker Hub (unless you run Kubernetes locally):
docker push YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
Finally, create a Kubernetes manifest. Sadly Kubernetes doesn't support env files the way docker-compose does, so you'll need to set these variables manually in the manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform-api
  labels:
    app: platform-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platform-api
  template:
    metadata:
      labels:
        app: platform-api
    spec:
      containers:
      - name: platform-api
        image: YOUR-DOCKERHUB-USERNAME/node-rest-auth-arangodb
        ports:
        - containerPort: 8080
        env:
        - name: NODE_ENV
          value: develop
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform-db
  labels:
    app: platform-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platform-db
  template:
    metadata:
      labels:
        app: platform-db
    spec:
      containers:
      - name: arangodb
        image: arangodb   # the official ArangoDB image for the database
        ports:
        - containerPort: 8529
        env:
        - name: ARANGO_ROOT_PASSWORD
          value: localhost
Deploy it with kubectl create.
Please note that this code is just indicative; I don't know your exact use case. Find more information in the docker-compose and Kubernetes docs and tutorials. Good luck!
I've updated the project on GitHub, it now all works, and the README documents how to run it.
I realized that env vars are considered runtime vars, which is why --env-file is an option for docker run and not docker build. This must also (I assume) be why docker-compose.yml has the env_file option, which I assume just passes the file along at run time. And in Kubernetes, I think these are passed in from a ConfigMap. This is done so the image remains more portable; the same project can be run with different vars passed in, no rebuild required.
Thanks ignacio-millán for the input.
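For completeness, a sketch of that ConfigMap route, with an illustrative ConfigMap name: create it straight from the .env file,
kubectl create configmap platform-api-env --from-env-file=.env
and then reference it from the container spec instead of listing each variable:
        envFrom:
        - configMapRef:
            name: platform-api-env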

Kubernetes Workflow

I have been using Kubernetes for a while now.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0+2831379", GitCommit:"283137936a498aed572ee22af6774b6fb6e9fd94", GitTreeState:"not a git tree", BuildDate:"2016-07-05T15:40:25Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean", BuildDate:"", GoVersion:"", Compiler:"", Platform:""}
I usually set an Ingress, Service and Replication Controller for each project.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: portifolio
  name: portifolio-ingress
spec:
  rules:
  - host: www.cescoferraro.xyz
    http:
      paths:
      - path: /
        backend:
          serviceName: portifolio
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: portifolio
  name: portifolio
  labels:
    name: portifolio
spec:
  selector:
    name: portifolio
  ports:
  - name: web
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: ReplicationController
metadata:
  namespace: portifolio
  name: portifolio
  labels:
    name: portifolio
spec:
  replicas: 1
  selector:
    name: portifolio
  template:
    metadata:
      namespace: portifolio
      labels:
        name: portifolio
    spec:
      containers:
      - image: cescoferraro/portifolio:latest
        imagePullPolicy: Always
        name: portifolio
        env:
        - name: KUBERNETES
          value: "true"
        - name: BRANCH
          value: "production"
My "problem" is that to deploy my app I usually do:
kubectl delete -f kubernetes.yaml
kubectl create -f kubernetes.yaml
I wish I could use a single command to deploy, whether my app is up or down. Rolling updates do not work when I use the same image (I think it's a bug in my Kubernetes server version), but they also do not work when the app has never been deployed at all.
I have read about Deployments; I wonder how they would help me?
Goals
1. Deploy if app is brand new
2. Replace existing pods with new ones using a new image from docker registry.
I don't think keeping all resources inside one single manifest helps you with what you want to achieve, since your Service, Ingress and ReplicationController are not likely to change simultaneously.
If all you want to do is roll out new pods, I would recommend you to replace your ReplicationController with a Deployment. Manifests have almost the exact same syntax so it's easy to migrate from standard RCs, and you could perform a server-side rolling update with a single kubectl replace -f manifest.yml.
Please note that even with a Deployment resource you can't trigger a redeployment if nothing changed in your manifest. kubectl replace would just do nothing. Therefore you could for example increment or change a tag inside your manifest in order to force the deployment, if needed (eg. revision: 003).
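For illustration, the ReplicationController above rewritten as a Deployment might look roughly like this (using the extensions/v1beta1 API available on that cluster version, with the revision label included as the forced-rollout trick):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: portifolio
  name: portifolio
  labels:
    name: portifolio
spec:
  replicas: 1
  selector:
    matchLabels:
      name: portifolio
  template:
    metadata:
      labels:
        name: portifolio
        revision: "003"   # bump this to force a new rollout
    spec:
      containers:
      - image: cescoferraro/portifolio:latest
        imagePullPolicy: Always
        name: portifolio
        env:
        - name: KUBERNETES
          value: "true"
        - name: BRANCH
          value: "production"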
As already written in the previous answer, it is recommended to use a Deployment instead of a ReplicationController for this.
Using imagePullPolicy: Always will only ensure that Kubernetes does a docker pull before starting new PODs. It does not force recreation of PODs when nothing in the Deployment resource changes.
I would suggest to add 2 things to your solution:
Add a label to the Deployment with the value CURRENT_DATE as a placeholder value
Add a simple shell script to your project which replaces the placeholder with the current date+time and then uses kubectl to apply the resources.
Example Bash script
#!/usr/bin/env bash
sed "s/CURRENT_DATE/$(date)/" kubernetes.yaml | kubectl apply -f -
Then use this script for redeployment instead of calling kubectl by yourself.
This is only meant as a very simple example. When it comes to creating/applying/patching resources in Kubernetes, things tend to get more and more complicated by time. If this happens, consider using some more advanced templating solutions, e.g. by using Python and Jinja2.
You could use a deployment for this. Create it the first time, and after that you only need to do kubectl set image deploy/my-app app=user/image:tag --record and you're good to go.
Doing that, you can also do cool things like kubectl rollout undo deploy/my-app or get history and status.
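For example:
kubectl rollout status deploy/my-app     # watch the rollout progress
kubectl rollout history deploy/my-app    # list recorded revisions
kubectl rollout undo deploy/my-app       # roll back to the previous revision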
You might consider using Argo.
Argo is an open-source workflow engine for Kubernetes. It lets you define complex microservices-based application deployments using YAML in a source repo and automatically re-deploys the app on YAML changes (e.g. on every commit to the production branch).
