I've been following tutorial videos and trying to understand how to build a small, minimalistic application. The videos I followed pull containers from registries, while I'm trying to test, build, and deploy everything locally for now, if possible. Here's my setup.
I have the latest Docker installed with Kubernetes enabled on macOS.
A helloworld NodeJS application running with Docker and Docker Compose
TODO: I'd like to be able to start, let's say, 3 instances of my app in the Kubernetes cluster
Dockerfile
FROM node:alpine
COPY package.json package.json
RUN npm install
COPY . .
CMD ["npm", "start"]
docker-compose.yml
version: '3'
services:
  user:
    container_name: users
    build:
      context: ./user
      dockerfile: Dockerfile
I'm creating a deployment file with the help of this tutorial; it may have problems since I'm merging information from both the YouTube videos and the web link.
Here's a minimalistic yml file just to get up and running; I'll study other aspects like readiness and liveness probes later.
apiVersion: v1
kind: Service
metadata:
  name: user
spec:
  selector:
    app: user
  ports:
    - port: 8080
  type: NodePort
Please review the above yml file for correctness. The question is: what do I do next?
The snippets you provide are regrettably insufficient but you have the basics.
I had a Google for you for a tutorial and -- unfortunately -- nothing obvious jumped out. That doesn't mean that there isn't one, just that I didn't find it.
You've got the right idea, and there are quite a few layers of technology to understand, but I commend your approach and think we can get you there.
Let's start with a helloworld Node.JS tutorial
https://nodejs.org/en/docs/guides/getting-started-guide/
Then you want to containerize this
https://nodejs.org/de/docs/guides/nodejs-docker-webapp/
The last step of that containerization guide is:
docker build --tag=<your username>/node-web-app .
But, because you're using Kubernetes, you'll want to push this image to a public repo. This is so that, regardless of where your cluster runs, it will be able to access the container image.
Since the example uses DockerHub, let's continue using that:
docker push <your username>/node-web-app
NB There's an implicit https://docker.io/<your username>/node-web-app:latest here
Then you'll need a Kubernetes cluster into which you can deploy your app
I think microk8s is excellent
I'm a former Googler but Kubernetes Engine is the benchmark (requires $$$)
Big fan of DigitalOcean too and it has Kubernetes (also $$$)
My advice: except for microk8s and minikube, don't ever run your own Kubernetes clusters; leave that to a cloud provider.
Now that you have all the pieces, I recommend you just:
kubectl run yourapp \
--image=<your username>/node-web-app:latest \
--port=8080 \
--replicas=1
kubectl run with --replicas is deprecated (and the flag has been removed in newer kubectl releases), but use it anyway if your version still supports it. It will create a Kubernetes Deployment (!) for you with 1 Pod (==replica). Feel free to adjust that value (perhaps --replicas=2) if you wish.
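If your kubectl has already removed those flags, a rough equivalent (a sketch, assuming kubectl 1.19+, where kubectl run creates only a bare Pod) is:
kubectl create deployment yourapp \
  --image=<your username>/node-web-app:latest \
  --port=8080 \
  --replicas=3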
Once you've created a Deployment, you'll want to create a Service to make your app accessible. Off the top of my head, the command is:
kubectl expose deployment/yourapp --type=NodePort
Now you can query the service:
kubectl get services/yourapp
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
yourapp NodePort 10.152.183.27 <none> 80:32261/TCP 7s
NB The NodePort that's been assigned (in this case!) is :32261 and so I can then interact with the app using curl http://localhost:32261 (localhost because I'm using microk8s).
kubectl is powerful. Another way to determine the NodePort is:
kubectl get service/yourapp \
--output=jsonpath="{.spec.ports[0].nodePort}"
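You can capture that in a shell variable and curl the service directly:
NODE_PORT=$(kubectl get service/yourapp \
  --output=jsonpath="{.spec.ports[0].nodePort}")
curl http://localhost:${NODE_PORT}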
The advantage of starting from kubectl run is that you can then easily determine the Kubernetes configuration needed to recreate this Deployment|Service:
kubectl get deployment/yourapp \
  --output=yaml \
  > ./yourapp.deployment.yaml
kubectl get service/yourapp \
  --output=yaml \
  > ./yourapp.service.yaml
These commands interrogate the cluster, retrieve the configuration, and pipe it into the files. The output will include some instance data too, but the gist of it shows what you'd need to recreate the Deployment and Service. You will need to edit these files.
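For example (a sketch; the exact fields vary by cluster version), the instance data you would typically delete before reusing the file includes:
# cluster-generated instance data, safe to strip before re-applying
metadata:
  creationTimestamp: ...
  resourceVersion: ...
  selfLink: ...
  uid: ...
status: ...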
But, you can test this by first deleting the deployment and the service and then recreating it from the configuration:
kubectl delete deployment/yourapp
kubectl delete service/yourapp
kubectl apply --filename=./yourapp.deployment.yaml
kubectl apply --filename=./yourapp.service.yaml
NB You'll often see multiple resource configurations merged into a single YAML file. This is perfectly valid YAML, though you rarely see it used outside Kubernetes. The format is:
...
some: yaml
---
...
some: yaml
---
Using this you could merge the yourapp.deployment.yaml and yourapp.service.yaml into a single Kubernetes configuration.
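For example, the merged file might look like this (a sketch, reusing the image name from above; your dumped configuration will have more fields):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yourapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yourapp
  template:
    metadata:
      labels:
        app: yourapp
    spec:
      containers:
        - name: yourapp
          image: <your username>/node-web-app:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: yourapp
spec:
  type: NodePort
  selector:
    app: yourapp
  ports:
    - port: 8080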
Related
I am new to containerization and am having some difficulties. I have an application that consists of a React frontend, a Python backend using FastAPI, and PostgreSQL databases using SQL Alchemy for object-relational mapping. I decided to put each component inside a Docker container so that I can deploy the application on Azure in the future (I know that some people may have strong opinions on deploying the frontend and database in containers, but I am doing so because it is required by the project's requirements).
After doing this, I started working with Minikube. However, I am having problems where all the containers that should be running inside pods have the status "CrashLoopBackOff". From what I can tell, this means that the images are being pulled from Docker Hub, containers are being started but then failing for some reason.
I tried running "kubectl logs" and nothing is returned. The "kubectl describe" command, in the Events section, returns: "Warning BackOff 30s (x140 over 30m) kubelet Back-off restarting failed container."
I have also tried to minimize the complexity by just trying to run the frontend component. Here are my Dockerfile and manifest file:
Dockerfile:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
manifest file .yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: xxxtest/my-first-repo:yyy-frontend
          ports:
            - containerPort: 3000
I do not have a service manifest yet, and I don't think it is related to this issue.
Can anyone provide any help or tips on how to troubleshoot this issue? I would really appreciate any guidance you can offer. Thank you in advance for your time and assistance!
Have a great day!
CrashLoopBackOff is caused by an error inside the container. To fix it, you need to see the container log. Here are my tips:
The best practice in K8s is to write application logs to /dev/stdout or /dev/stderr; redirecting them to a file is not recommended, because kubectl logs <POD NAME> only shows what goes to stdout/stderr.
Try clearing your local container cache, then pull and run the exact image and tag you configured in your deployment file.
If you need any environment variables to run the container locally, you'll also need those envs in your deployment file.
Always set imagePullPolicy: Always, especially if you reuse the same image tag (see the sketch below). EDIT: because the default image pull policy is IfNotPresent, Kubernetes will not pull a new image version after you fix the container image.
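For reference, here is where that field goes in the frontend Deployment above (only the relevant fragment shown):
spec:
  template:
    spec:
      containers:
        - name: frontend
          image: xxxtest/my-first-repo:yyy-frontend
          # force a fresh pull even when this tag is already cached locally
          imagePullPolicy: Always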
Docs:
ImagePullPolicy: https://kubernetes.io/docs/concepts/containers/images/
Standard Output: https://kubernetes.io/docs/concepts/cluster-administration/logging/
I am trying to change my existing deployment logic and switch to Kubernetes. (My server is on GCP, and until now I have used docker-compose to run it.) So I decided to start by using kompose and generating services/deployments from my existing docker-compose file. After running
kompose --file docker-compose.yml convert
#I got warnings indicating Volume mount on the host "mypath" isn't supported - ignoring path on the host
After a little research I decided to use the command below to "fix" the issue
kompose convert --volumes hostPath
What this command achieved: it replaced the persistent volume claims generated by the first command with the code below.
volumeMounts:
  - mountPath: /path
    name: certbot-hostpath0
  - mountPath: /somepath
    name: certbot-hostpath1
  - mountPath: /someotherpath
    name: certbot-hostpath2
volumes:
  - hostPath:
      path: /path/certbot
    name: certbot-hostpath0
  - hostPath:
      path: /path/cert_challenge
    name: certbot-hostpath1
  - hostPath:
      path: /path/certs
    name: certbot-hostpath2
But since I am working on my local machine,
kubectl apply -f <output file>
results in The connection to the server localhost:8080 was refused - did you specify the right host or port?
I didn't want to connect my local env to GCP just to generate the necessary files. Is this a must? Or can I move this step to my startup-gcp script, etc.?
I feel like I am heading in the right direction, but I need confirmation that I am not messing something up.
1) I have only one Compute Engine (VM instance) and lots of data in my prod db. How do I (and do I need to) make sure I don't lose any data in the db?
2) In startup-gcp, after doing everything else (pruning docker images etc.), I had a docker run command that makes use of docker/compose 1.13.0 up -d. How should I change it to switch to Kubernetes?
3) Should I change anything in nginx.conf, as it references 2 different services in my docker-compose? (I don't think I should, since the same services also exist in the Kubernetes-generated yamls.)
You should consider using Persistent Volume Claims (PVCs). If your cluster is managed, in most cases it can automatically create the PersistentVolumes for you.
One way to create the Persistent Volume Claims corresponding to your docker-compose files is using Move2Kube (https://github.com/konveyor/move2kube). You can download the release, place it in your PATH, and run:
move2kube translate -s <path to your docker compose files>
It will then interactively allow you to configure PVCs.
If you have a specific cluster you are targeting, you can get the storage classes it supports using the command below, in a terminal where your Kubernetes cluster is set as the context for kubectl:
move2kube collect
Once you run collect, you will have a m2k_collect folder, which you can place in the folder containing your docker-compose files. When you run move2kube translate, it will automatically ask whether to target that specific cluster, and offer the option to choose a storage class from it.
1) I have only one Compute Engine (VM instance) and lots of data in my prod db. How do I (and do I need to) make sure I don't lose any data in the db?
Once the PVC is provisioned, you can copy the data to the PVC by using the kubectl cp command to copy it into a pod where the PVC is mounted.
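For example (hypothetical pod and path names; the PVC is assumed to be mounted at /data):
# copy a local database dump into the pod that mounts the PVC
kubectl cp ./prod-db-backup.sql my-namespace/my-db-pod:/data/prod-db-backup.sql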
2) In startup-gcp, after doing everything else (pruning docker images etc.), I had a docker run command that makes use of docker/compose 1.13.0 up -d. How should I change it to switch to Kubernetes?
You can potentially change it to use a helm chart. Move2Kube, during the interactive session, can help you create a helm chart too. Once you have the helm chart, all you have to do is run helm upgrade -i (see the sketch below).
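For example (a sketch; the release name and chart path are hypothetical):
# install the release if absent, upgrade it if present
helm upgrade -i myapp ./myapp-chart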
3) Should I change anything in nginx.conf, as it references 2 different services in my docker-compose? (I don't think I should, since the same services also exist in the Kubernetes-generated yamls.)
If the service names are the same, in most cases it should work.
I'm currently migrating a legacy server to Kubernetes, and I found that kubectl or dashboard only shows the latest log file, not the older versions. In order to access the old files, I have to ssh to the node machine and search for it.
In addition to being a hassle, my team wants to restrict access to the node machines themselves, because they will be running pods from many different teams and unrestricted access could be a security issue.
So my question is: can I configure Kubernetes (or a Docker image) so that these old (rotated) log files are stored in some directory accessible from inside the pod itself?
Of course, in a pinch, I could probably just execute something like run_server.sh | tee /var/log/my-own.log when the pod starts... but then, to do it correctly, I'll have to add the whole logfile rotation functionality, basically duplicating what Kubernetes is already doing.
So there are a couple of ways to do this, depending on the scenario. If you are just interested in the logs of the same pod from before its last restart, you can use the --previous flag:
kubectl logs -f <pod-name-xyz> --previous
But since, in your case, you are interested in log files beyond one rotation, here is how you can do it. Add a sidecar container alongside your application container:
volumeMounts:
  - name: varlog
    mountPath: /tmp/logs
- name: log-helper
  image: busybox
  # stream every log file in the mounted directory, from the beginning
  args: [/bin/sh, -c, 'tail -n+1 -f /tmp/logs/*.log']
  volumeMounts:
    - name: varlog
      mountPath: /tmp/logs
volumes:
  - name: varlog
    hostPath:
      path: /var/log
This mounts the host's /var/log directory (which contains all the rotated log files) at /tmp/logs inside the containers, and the tail command ensures the content of all those files is streamed. Now you can run:
kubectl logs <pod-name-abc> -c log-helper
This solution does away with SSH access, but it still requires access to kubectl and an extra sidecar container. I still think this is not an ideal solution, and you should consider one of the options from Kubernetes' cluster-level logging architecture documentation instead, such as a node-level logging agent or a dedicated logging sidecar.
What is the best way to change the source code of my application running as Kubernetes pod without creating a new version of image so I can avoid time taken for pushing and pulling image from repository?
You may enter the container using bash, if it is installed in the image, and modify it using:
docker exec -it <CONTAINERID> /bin/bash
However, this isn't an advisable solution. If your modifications succeed, you should update the Dockerfile accordingly, or else you risk losing your work and the ability to share it with others.
Have the container pull from git on creation?
Set up CI/CD?
Another way to achieve a similar result is to leave the application source outside of the container and mount the application source folder in the container.
This is especially useful when developing web applications in environments such as PHP: your container is set up with your Apache/PHP stack, and /var/www/html is configured to mount your local filesystem.
If you are using minikube, it already mounts a host folder within the minikube VM. You can find the exact paths mounted, depending on your setup, here:
https://kubernetes.io/docs/getting-started-guides/minikube/#mounted-host-folders
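You can also mount an arbitrary local folder into the minikube VM yourself (a sketch; the paths are hypothetical):
# make the local source folder visible inside the minikube VM
minikube mount /Users/<username>/<source_folder>:/mnt/sources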
Putting it all together, this is what an nginx deployment would look like on Kubernetes, mounting a local folder containing the web site being served:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            # /usr/share/nginx/html is the nginx image's default web root
            # (an Apache/PHP stack would use /var/www/html instead)
            - mountPath: /usr/share/nginx/html
              name: sources
              readOnly: true
      volumes:
        - name: sources
          hostPath:
            path: /Users/<username>/<source_folder>
            type: Directory
Finally, we resolved the issue. We changed our image repository from Docker Hub to AWS ECR in the same region where our Kubernetes cluster runs. Now it takes much less time to push/pull images.
This is definitely not recommended for production.
But if your intention is local development with kubernetes, take a look at these tools:
Telepresence
Telepresence is an open source tool that lets you run a single service
locally, while connecting that service to a remote Kubernetes cluster.
Kubectl warp
Warp is a kubectl plugin that allows you to execute your local code
directly in Kubernetes without slow image build process.
The kubectl warp command runs your command inside a container, the same
way as kubectl run does, but before executing the command, it
synchronizes all your files into the container.
I think creating a new image for each deployment should be treated as the standard process.
A few benefits:
immutable images: no intervention in the running instance, which ensures the image runs the same in any environment
rollback: if you encounter issues in the new version, roll back to the previous version (see the sketch below)
dependencies: new versions may have new dependencies
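For the rollback point, if each version is its own image, reverting is a single command (a sketch, assuming a Deployment named yourapp):
# roll the Deployment back to its previous revision
kubectl rollout undo deployment/yourapp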
I have been digging through the Kubernetes documentation for hours. I understand the core design, and the notion of services, controllers, pods, etc.
What I don't understand, however, is the process in which I can declaratively configure the cluster. That is, a way for me to write a config file (or a set thereof) to define the makeup, and scaling options of the cloud deployment. I want to be able to declare which containers I want in which pods, how they will communicate, how they will scale, etc. without running a ton of cli commands.
Is there docker-compose functionality for Kubernetes?
I want my application to be defined in git and version controlled, without relying on manual CLI interactions.
Is this possible to do in a concise way? Is there a reference that is more clear than the official documentation?
If you're still looking, maybe this tool can help: https://github.com/kelseyhightower/compose2kube
You can create a compose file:
# sample compose file with 3 services
web:
  image: nginx
  ports:
    - "80"
    - "443"
database:
  image: postgres
  ports:
    - "5432"
cache:
  image: memcached
  ports:
    - "11211"
Then use the tool to convert it to kubernetes objects:
compose2kube -compose-file docker-compose.yml -output-dir output
Which will create these files:
output/cache-rc.yaml
output/database-rc.yaml
output/web-rc.yaml
Then you can use kubectl to apply them to kubernetes.
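For example, applying everything in the output directory at once:
kubectl apply -f output/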
If you have existing Docker Compose files, you may take a look at the Kompose project.
kompose is a tool to help users who are familiar with docker-compose move to Kubernetes. kompose takes a Docker Compose file and translates it into Kubernetes resources.
kompose is a convenience tool to go from local Docker development to managing your application with Kubernetes. Transformation of the Docker Compose format to Kubernetes resources manifest may not be exact, but it helps tremendously when first deploying an application on Kubernetes.
To run the docker-compose.yaml file (or your own), run:
kompose up
To convert docker-compose.yaml into Kubernetes deployments and services with one simple command:
$ kompose convert -f docker-compose.yaml
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "redis-master-service.yaml" created
INFO Kubernetes file "redis-slave-service.yaml" created
INFO Kubernetes file "frontend-deployment.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created
For more info, check: http://kompose.io/
Docker has officially announced docker-compose functionality for Kubernetes clusters. So now you can define your Kubernetes resources in a compose file and apply them using that single file.
First we need to install the Compose on Kubernetes controller into your Kubernetes cluster. This controller uses the standard Kubernetes extension points to introduce the Stack to the Kubernetes API. Check the full documentation to install the docker compose controller:
https://github.com/docker/compose-on-kubernetes
Let's write a simple compose yaml file:
version: "3.7"
services:
web:
image: dockerdemos/lab-web
ports:
- "33000:80"
words:
image: dockerdemos/lab-words
deploy:
replicas: 3
endpoint_mode: dnsrr
db:
image: dockerdemos/lab-db
We’ll then use the docker client to deploy this to a Kubernetes cluster running the controller:
$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml words
Waiting for the stack to be stable and running...
db: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
web: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
words: Ready [pod status: 1/3 ready, 2/3 pending, 0/3 failed]
Stack words is stable and running
We can then interact with those objects via the Kubernetes API. Here you can see we’ve created the lower-level objects like Services, Pods, Deployments and ReplicaSets automatically:
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/db 1 1 1 1 57s
deployment.apps/web 1 1 1 1 57s
deployment.apps/words 3 3 3 3 57s
It’s important to note that this isn’t a one-time conversion. The Compose on Kubernetes API Server introduces the Stack resource to the Kubernetes API. So we can query and manage everything at the same level of abstraction as we’re building the application. That makes delving into the details above useful for understanding how things work, or debugging issues, but not required most of the time:
$ kubectl get stack
NAME STATUS PUBLISHED PORTS PODS AGE
words Running 33000 5/5 4m
Kubernetes certainly has its own yaml (as shown in "Deploying Applications")
But as "Docker Clustering Tools Compared: Kubernetes vs Docker Swarm", it was not written (just) for Docker, and it has its own system.
You could use docker-compose to start Kubernetes though, as shown in "vyshane/kid": that masks some of the kubectl CLI commands behind scripts (which can be versioned).