Kubernetes deployment.yaml for django+gunicorn+nginx - docker

I have created Docker images using the docker-compose.yml below:
version: '2'
services:
  djangoapp:
    build: .
    volumes:
      - .:/sig_app
      - static_volume:/sig_app
    networks:
      - nginx_network
  nginx:
    image: nginx:1.13
    ports:
      - "80:80"
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/sig_app
    depends_on:
      - djangoapp
    networks:
      - nginx_network
networks:
  nginx_network:
    driver: bridge
volumes:
  static_volume:
I ran docker-compose build and docker-compose up, and the three images below were created:
kubernetes_djangoapp
docker.io/python
docker.io/nginx
I want to deploy the application to Kubernetes using a YAML file. I am new to Kubernetes.
The Django application runs on port 8000 and Nginx on port 80.

This should work:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      volumes:
      - name: django-nginx
        emptyDir: {}
      - name: nginx-host
        hostPath:
          path: /config/nginx/conf.d
      containers:
      - name: djangoapp
        image: kubernetes_djangoapp
        volumeMounts:
        - name: django-nginx
          mountPath: /sig_app
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
        volumeMounts:
        - name: django-nginx
          mountPath: /sig_app
        - name: nginx-host
          mountPath: /etc/nginx/conf.d
Note that you will have to modify some things to make it your own. I am missing where the image is hosted; you should push it to Docker Hub, or any registry of your choice.
About the volumes: both containers share a non-persistent volume (django-nginx), which maps the /sig_app directory of each container to the other's. Another volume shares the nginx container's /etc/nginx/conf.d with your host's /config/nginx/conf.d, to pass in the config file. A better way would be to use a ConfigMap. Check on that.
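The ConfigMap approach could look roughly like this (a sketch: the ConfigMap name nginx-conf and the nginx config content are placeholders; the proxy_pass target assumes gunicorn listens on port 8000 in the same pod, so it is reachable on localhost):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        # containers in a pod share localhost
        proxy_pass http://127.0.0.1:8000;
      }
    }
```

Then, in the Deployment, the hostPath volume would be replaced with a configMap volume:

```yaml
      volumes:
      - name: nginx-host
        configMap:
          name: nginx-conf
```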
So, set the image for django and let me know if it doesn't work, and we will see what's failing.
Cheers

Take a look at Kompose. It will allow you to simply run the command
kompose up
to immediately deploy your docker-compose configuration to your cluster.
If you want to create a .yaml from your docker-compose file first to inspect and edit, you can run
kompose convert

Related

Trouble converting docker-compose containing volume to Kubernetes manifest

I am learning Kubernetes and I am starting to convert an existing docker-compose.yml to Kubernetes manifests one component at a time. Here is the component I am currently trying to convert:
version: '3'
services:
  mongodb:
    container_name: mongodb
    restart: always
    hostname: mongo-db
    image: mongo:latest
    networks:
      - mynetwork
    volumes:
      - c:/MongoDb:c:/data/db
    ports:
      - 27017:27017
networks:
  mynetwork:
    name: mynetwork
I am able to log into the Mongo instance when the container is running in Docker, but I cannot get this working in Kubernetes. Here is the Kubernetes manifest I have tried so far:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:latest
        volumeMounts:
        - name: mongodb-data
          mountPath: /data/db
        ports:
        - containerPort: 27017
          hostPort: 27017
      volumes:
      - name: mongodb-data
        hostPath:
          path: "C:\\MongoDb"
With this manifest I see the error Error: Error response from daemon: invalid mode: /data/db when I do a kubectl describe pods.
I understand the mapping of volumes from Docker to Kubernetes isn't 1-to-1, so is this reasonable to do in the Kubernetes space?
Thank you and apologies if this feels like a silly question.
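One direction worth trying, hedged heavily: the hostPath above is a Windows path, but the Kubernetes node is a Linux VM, so the path usually has to be written as the node's view of drive C:. Assuming Docker Desktop's built-in Kubernetes on the WSL2 backend, where the host's C: drive appears on the node under /run/desktop/mnt/host/c, the volume could be sketched as:

```yaml
      volumes:
      - name: mongodb-data
        hostPath:
          # the node's (Linux VM's) view of C:\MongoDb; backend-specific
          path: /run/desktop/mnt/host/c/MongoDb
          type: DirectoryOrCreate
```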

Create kubernetes yml file from docker-compose.yml file

I have this docker-compose.yml file which I am using to bring up three microservices and one API gateway:
version: '3'
services:
  serviceone:
    container_name: serviceone
    restart: always
    build: serviceone/
    ports:
      - '3000:3000'
  servicetwo:
    container_name: servicetwo
    restart: always
    build: servicetwo/
    ports:
      - '3001:3001'
  servicethree:
    container_name: servicethree
    restart: always
    build: servicethree/
    ports:
      - '3002:3003'
  apigateway:
    container_name: timezoneapigateway
    restart: always
    build: timezone/
    ports:
      - '8080:8080'
    links:
      - serviceone
      - servicetwo
      - servicethree
Now I want to deploy these Docker images in one pod in Kubernetes so that the API gateway can connect with all three microservices. The current version of the API gateway is working, but I am not getting even the slightest hint of how to do this in Kubernetes. I am really new to Kubernetes; can anyone tell me how to design a Kubernetes YAML file to achieve this?
You don't have to run all your services in the same pod. The standard in Kubernetes is to have separate Deployments and Services for each app. Here is a Deployment manifest for serviceone, but you can easily modify it for servicetwo, servicethree and apigateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceone
  labels:
    app: serviceone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceone
  template:
    metadata:
      labels:
        app: serviceone
    spec:
      containers:
      - name: serviceone
        image: serviceone:latest
        ports:
        - containerPort: 3000
And the same goes for the Service manifest:
apiVersion: v1
kind: Service
metadata:
  name: serviceone
spec:
  selector:
    app: serviceone
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
Your services will be accessible within the cluster like this (using the container ports from your compose file):
serviceone:3000
servicetwo:3001
servicethree:3003
apigateway:8080
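With those Services in place, the gateway's Deployment can point at the in-cluster DNS names, for example via environment variables. A sketch of the container spec (the variable names are placeholders that depend on how the gateway reads its configuration):

```yaml
      containers:
      - name: apigateway
        image: apigateway:latest
        ports:
        - containerPort: 8080
        env:
        # hypothetical config variables; Service DNS names resolve cluster-wide
        - name: SERVICE_ONE_URL
          value: http://serviceone:3000
        - name: SERVICE_TWO_URL
          value: http://servicetwo:3001
        - name: SERVICE_THREE_URL
          value: http://servicethree:3003
```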

Why I can't access my application inside Kubernetes cluster through tomcat server?

I'm running a Kubernetes cluster with minikube on Ubuntu Server 20.04.1 LTS. When I launch the deployments I can access the Tomcat server from the cluster, but I can't open my application; I get the error message:
HTTP Status 404 – Not Found
Here is my docker-compose file :
version: "3"
# NETWORK
networks:
  network1:
    driver: bridge
volumes:
  dir-site:
    driver_opts:
      device: /smartclass_docker/site/
      type: bind
      o: bind
# Declare services
services:
  # service name
  server:
    container_name: server
    # Container image name
    image: repo/myimage
    ## Image used to build Dockerfile
    build:
      #dockerfile: Dockerfile
      context: ./smart/
    # Container ports and host matching
    ports:
      - "3306:3306"
      - "8080:8080"
    ## Startup policy
    restart: always
    # connect to network1
    networks:
      - network1
    volumes:
      - /data/dockerStore/data/db_server_1:/var/lib/mysql/
Here is my kubernetes deployment file :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  # type: LoadBalancer
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: jksun12/vdsaipro
        command: ["/run.sh"]
        ports:
        - containerPort: 80
        - containerPort: 3306
        - containerPort: 9001
        - containerPort: 9002
#        volumeMounts:
#        - name: myapp-pv-claim
#          mountPath: /var/lib/mysql
#      volumes:
#      - name: myapp-pv-claim
#        persistentVolumeClaim:
#          claimName: myapp-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-pv-claim
  labels:
    app: myapp
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
My application files are in the /smart directory, which is a subdirectory of the directory where I actually run docker-compose and build my main Dockerfile.
How can I solve it ?
Thanks
I checked the logs inside the containers and found that the errors came from the extraction of the .war file inside the container.
There was also an error with the version of the Tomcat server used for the application, so I changed Tomcat to another version.
It's solved now. Thanks

Traefik rule need solution for sub domain

I want to run applications behind Traefik not as sub-domains, but as paths, something like xyz.abc.com/m1 or xyz.abc.com/m2 and so on.
Which label will work for this? I have tried PathPrefix but it's not working. A sample application (Joomla) is deployed in Docker Swarm mode. Could I use Nginx or HAProxy for this instead? If so, how?
When putting multiple applications behind the same hostname, there's a good chance each app needs to know its URL prefix isn't at the root (as most apps expect to be deployed at the root). So, you might need to look at that too.
Traefik-wise, a stack file might look like this. This is running in Swarm mode (since you mentioned Swarm), which requires the Traefik labels to be on the service, not on the container. As such, they have to be within the deploy.labels definition, not just labels.
Example using Traefik 1
version: "3.7"
services:
  proxy:
    image: traefik:1.7
    command: --docker --docker.swarmMode
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  app1:
    image: your-image-for-app1
    deploy:
      labels:
        traefik.backend: app1
        traefik.frontend.rule: PathPrefix:/m1
        traefik.port: 80
  app2:
    image: your-image-for-app2
    deploy:
      labels:
        traefik.backend: app2
        traefik.frontend.rule: PathPrefix:/m2
        traefik.port: 80
Example using Traefik 2
version: "3.7"
services:
  proxy:
    image: traefik:2.0
    command: --providers.docker --providers.docker.swarmMode
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  app1:
    image: your-image-for-app1
    deploy:
      labels:
        traefik.http.routers.app1.rule: PathPrefix(`/m1`)
        traefik.http.services.app1.loadbalancer.server.port: 80
  app2:
    image: your-image-for-app2
    deploy:
      labels:
        traefik.http.routers.app2.rule: PathPrefix(`/m2`)
        traefik.http.services.app2.loadbalancer.server.port: 80
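If an app insists on being served from the root, Traefik 2 can also strip the prefix before forwarding, using its StripPrefix middleware. A sketch for app1 (the middleware name m1-strip is arbitrary):

```yaml
  app1:
    image: your-image-for-app1
    deploy:
      labels:
        traefik.http.routers.app1.rule: PathPrefix(`/m1`)
        # attach the middleware to the router, then define it
        traefik.http.routers.app1.middlewares: m1-strip
        traefik.http.middlewares.m1-strip.stripprefix.prefixes: /m1
        traefik.http.services.app1.loadbalancer.server.port: 80
```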

Package.json file not found when using kubernetes

I'm trying to set up Kubernetes in my local environment using Docker. I've built the necessary Docker image with this Dockerfile:
FROM node:9.11.1
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app/
EXPOSE 3002
CMD [ "npm", "start" ]
I then pushed this image to my private Docker repo on the Google Cloud registry. I can confirm that I can push and pull the image from the cloud repo, so I then built a docker-compose file using that repo as the image source:
version: '3'
services:
  redis:
    image: redis
    ports:
      - 6379:6379
    networks:
      - my-network
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    volumes:
      - ./db:/data/db
    networks:
      - my-network
  my-test-app:
    tty: true
    image: gcr.io/my-test-app
    ports:
      - 3002:3002
    depends_on:
      - redis
      - mongodb
    networks:
      - my-network
    volumes:
      - .:/usr/src/app
    environment:
      - REDIS_PORT=6379
      - REDIS_HOST=redis
      - DB_URI=mongodb://mongodb:27017/
    command: bash -c "ls && npm install"
networks:
  my-network:
    driver: bridge
volumes:
  mongodb:
Then, building off of that, I used kompose to generate my deployment file, which looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.12.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: my-test-app
  name: my-test-app
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: my-test-app
    spec:
      imagePullSecrets:
      - gcr-json-key
      containers:
      - args:
        - bash
        - -c
        - ls && npm install
        env:
        - name: DB_URI
          value: mongodb://mongodb:27017/
        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"
        image: gcr.io/my-test-app
        name: my-test-app
        ports:
        - containerPort: 3002
        resources: {}
        tty: true
        volumeMounts:
        - mountPath: /usr/src/app
          name: my-test-app-claim0
      restartPolicy: Always
      volumes:
      - name: my-test-app-claim0
        persistentVolumeClaim:
          claimName: my-test-app-claim0
status: {}
As you can see in the args section of my YAML, I list all the files in /usr/src/app. However, in the logs the only file that appears is a single package-lock.json, which causes the following npm install to fail. This error does not occur when I use docker-compose to launch my app, so for some reason only Kubernetes is having trouble. I can also confirm that my image does contain a package.json file, by running an interactive shell. I'm unsure how to proceed, so any help would be appreciated!
You are mounting something else over /usr/src/app where package.json is supposed to be located. That hides all the files in there. Remove the volumes and volumeMounts sections.
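Applied to the generated manifest above, the pod spec would end up roughly like this once the volumeMounts, volumes, and persistentVolumeClaim entries are dropped (a sketch, with imagePullSecrets omitted for brevity), so the files baked into the image at /usr/src/app are no longer hidden:

```yaml
    spec:
      containers:
      - args:
        - bash
        - -c
        - ls && npm install
        image: gcr.io/my-test-app
        name: my-test-app
        ports:
        - containerPort: 3002
      restartPolicy: Always
```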
