I'm new to Kubernetes. I'm making my first ever attempt to deploy an application to Kubernetes and expose it to the public. However, when I try and deploy my configuration, I get this error:
error: unable to recognize "deployment.yml": no matches for kind "Service" in version "apps/v1"
So, let's run through the details.
I'm on Ubuntu 18.04. I'm using Minikube with VirtualBox as the hypervisor driver. Here is all the version info:
MiniKube = v1.11.0
VirtualBox = 6.1.0
Kubectl = Client Version 1.18.3, Server Version 1.18.3
The app I'm trying to deploy is a super-simple express.js app that returns Hello World on request.
const express = require('express');
const app = express();
app.get('/hello', (req, res) => res.send('Hello World'));
app.listen(3000, () => console.log('Running'));
I have a build script that I've used before for deploying Express apps to Docker; it zips up all the source files. Then I've got my Dockerfile:
FROM node:12.16.1
WORKDIR /usr/src/app
# Copy the zipped build output into the image and unpack it
COPY ./build/TestServer-*.zip ./TestServer.zip
RUN unzip TestServer.zip
# Install dependencies and run the app
RUN yarn
CMD ["yarn", "start"]
So now I run some commands. eval $(minikube docker-env) points my docker CLI at Minikube's Docker daemon, so the image is available inside the cluster and I don't need to push this container to a remote registry. docker build -t testserver:v1 . builds and tags the image.
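For clarity, that build sequence is exactly:

eval $(minikube docker-env)
docker build -t testserver:v1 .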
Now, let's go to my deployment.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testserver
  template:
    metadata:
      labels:
        app: testserver
    spec:
      containers:
      - name: testserver
        image: testserver:v1
        ports:
        - containerPort: 3000
        env:
        imagePullPolicy: Never
---
apiVersion: apps/v1
kind: Service
metadata:
  name: testserver
spec:
  selector:
    app: testserver
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer
I'm trying to create a deployment with a pod, plus a service to expose it. I'm sure there are various issues in here; this is the newest part to me and I'm still trying to learn and understand the spec. However, the problem I'm asking for help with occurs when I try to use this config: I run the create command and get the error.
kubectl create -f deployment.yml
deployment.apps/testserver created
error: unable to recognize "deployment.yml": no matches for kind "Service" in version "apps/v1"
The result of this is I see my app listed as a deployment and as a pod, but the service part has failed. I've been scouring the internet for documentation on why this is happening, but I've got nothing.
A Service is of apiVersion: v1, not apiVersion: apps/v1 (which is for a Deployment). You can check it in the official docs. You also need to use a Service of type NodePort (or ClusterIP) if you want to expose your deployment: type LoadBalancer will not get an external IP in minikube (it stays pending). LoadBalancer is mostly used in managed cloud clusters, where a Service of that type provisions a cloud load balancer (like an ELB in AWS).
To check the API group of a resource you can use: kubectl api-resources
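For reference, here is the Service from the question with just those two changes applied (apiVersion v1 and type NodePort; everything else unchanged):

apiVersion: v1
kind: Service
metadata:
  name: testserver
spec:
  selector:
    app: testserver
  ports:
  - port: 80
    targetPort: 3000
  type: NodePort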
I have a question about Kubernetes networking.
My working scenario:
I have a Jenkins container on my localhost, and this container is up and running. Inside Jenkins, I have a job. To access Jenkins, I use the URL http://localhost:8080. (Jenkins is not running inside Kubernetes.)
My Flask app triggers the Jenkins job with this code:
import jenkins
from flask import Flask, request

app = Flask(__name__)

@app.route("/create", methods=["GET", "POST"])
def create():
    if request.method == "POST":
        dosya_adi = request.form["sendmail"]
        server = jenkins.Jenkins('http://localhost:8080/', username='my-user-name', password='my-password')
        server.build_job('jenkins_openvpn', {'FILE_NAME': dosya_adi}, token='my-token')
Then I Dockerized this Flask app. My image name is "jenkins-app".
If I run this command, everything works perfectly:
docker run -it --network="host" --name=jenkins-app jenkins-app
But I want to do the same thing with Kubernetes. For that I wrote this YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-pod
spec:
  hostNetwork: true
  containers:
  - name: jenkins-app
    image: jenkins-app:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 5000
With this YAML file, I can access the Flask app on port 5000. But when I try to trigger the Jenkins job, I get an error like this: requests.exceptions.ConnectionError
Can you suggest a way to do this with Kubernetes?
I created an endpoint.yml file with the contents below, and this solved my problem:
apiVersion: v1
kind: Endpoints
metadata:
  name: jenkins-server
subsets:
  - addresses:
      - ip: my-ps-ip
    ports:
      - port: 8080
Then I changed this line in my Flask app:
server = jenkins.Jenkins('http://my-ps-ip:8080/', username='my-user-name', password='my-password')
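Note that a manual Endpoints object like this is normally paired with a selector-less Service of the same name, so the host Jenkins is also reachable at the service's cluster DNS name. A minimal sketch (name and port taken from the Endpoints above):

apiVersion: v1
kind: Service
metadata:
  name: jenkins-server
spec:
  ports:
    - port: 8080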
First you expose your Jenkins server:
kubectl expose pod jenkins-pod --port=8080 --target-port 5000
Then you check the existence of the service:
kubectl get svc jenkins-pod -o yaml
Use it for your Flask app to connect to your Jenkins server via this service:
... jenkins.Jenkins('http://jenkins-pod.default.svc.cluster.local:8080/...'
Note: this assumes you run everything in the default namespace; otherwise change "default" to your namespace.
A colleague created a K8s cluster for me. I can run services in that cluster without any problem. However, I cannot run services that depend on an image from Amazon ECR, which I really do not understand. Probably, I made a small mistake in my deployment file and thus caused this problem.
Here is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
        ports:
        - containerPort: 5000
Here is my service file:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello
spec:
  type: NodePort
  ports:
  - port: 5000
    nodePort: 30002
    protocol: TCP
  selector:
    app: hello
On the master node, I ran these commands to make sure Kubernetes knows about the deployment and the service:
kubectl create -f dep.yml
kubectl create -f service.yml
I used the K8s extension in vscode to check the logs of my pods.
This is the error I get:
Error from server (BadRequest): container "hello" in pod
"hello-deployment-xxxx-49pbs" is waiting to start: trying and failing
to pull image.
Apparently, pulling is the issue. This does not happen when using a public image from Docker Hub. Logically, this would be a permissions issue, but it looks like it is not: I get no error message when running this command on the master node:
docker pull xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
This command just pulls my image.
I am confused now. I can pull my image with docker pull on the master node, but K8s fails to do the pull. Am I missing something in my deployment file? Some property that says "repositoryIsPrivateButDoNotComplain"? I just do not get it.
How to fix this so K8s can easily use my image from Amazon ECR?
You should create and use secrets for ECR authorization.
This is what you need to do.
Create a secret for the Kubernetes cluster: execute the shell script below from a machine that has access to the AWS account in which the ECR registry is hosted, changing the placeholders to match your setup. Make sure the machine on which you execute this script has the aws CLI installed and AWS credentials configured. If you are using a Windows machine, execute this script in a Cygwin or Git Bash console.
#!/bin/bash
ACCOUNT=<AWS_ACCOUNT_ID>
REGION=<REGION>
SECRET_NAME=<SECRET_NAME>
EMAIL=<SOME_DUMMY_EMAIL>

# Fetch a 12-hour ECR auth token and strip the leading "AWS:" username
TOKEN=`/usr/local/bin/aws ecr get-authorization-token --region $REGION --profile <AWS_PROFILE> --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`

# Recreate the docker-registry secret with the fresh token
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
  --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
  --docker-username=AWS \
  --docker-password="${TOKEN}" \
  --docker-email="${EMAIL}"
Change the deployment and add an imagePullSecrets section, which your pods will use when downloading the image from ECR.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: xxxxxxxxx.yyy.ecr.eu-zzzzz.amazonaws.com/test:latest
        ports:
        - containerPort: 5000
      imagePullSecrets:
      - name: SECRET_NAME
Create the pods and service.
Even if it succeeds, the secret will still expire in 12 hours (ECR tokens are short-lived). To overcome that, set up a cron job that recreates the secret on the Kubernetes cluster periodically. For the cron job, use the same script given above.
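For example, a crontab entry along these lines would refresh it every 8 hours (the script path and log location here are assumptions, not part of the original setup):

# m h dom mon dow   command
0 */8 * * * /opt/scripts/recreate-ecr-secret.sh >> /var/log/ecr-secret.log 2>&1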
For the 12-hour problem: if you are using Kubernetes 1.20, configure and use the kubelet image credential provider:
https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/
You need to enable the alpha feature gate KubeletCredentialProviders in your kubelet.
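For orientation, the kubelet's CredentialProviderConfig for ECR might look roughly like this; treat the provider name, match pattern, and API versions as assumptions to be checked against the linked docs:

apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider      # assumed binary name of the provider plugin
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"    # assumed pattern matching ECR registries
    defaultCacheDuration: "12h"        # cache credentials for the token lifetime
    apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1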
If you are using a lower Kubernetes version where this feature is not available, then use https://medium.com/@damitj07/how-to-configure-and-use-aws-ecr-with-kubernetes-rancher2-0-6144c626d42c
I'm here for hours every day, reading and learning, but this is my first question, so bear with me.
I'm simply trying to get my Kubernetes cluster to start up.
Below is my skaffold.yaml file in the root of the project:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: omesadev/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
Below is my auth-depl.yaml file in the infra/k8s/ directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: omesadev/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
Below is the error message I'm receiving in the cli:
exiting dev mode because first deploy failed: unable to connect to Kubernetes: getting client config for Kubernetes client: error creating REST client config for kubeContext "": invalid configuration: [unable to read client-cert C:\Users\omesa\.minikube\profiles\minikube\client.crt for minikube due to open C:\Users\omesa\.minikube\profiles\minikube\client.crt: The system cannot find the path specified., unable to read client-key C:\Users\omesa\.minikube\profiles\minikube\client.key for minikube due to open C:\Users\omesa\.minikube\profiles\minikube\client.key: The system cannot find the path specified., unable to read certificate-authority C:\Users\omesa\.minikube\ca.crt for minikube due to open C:\Users\omesa\.minikube\ca.crt: The system cannot find the file specified.
I've tried to install kubernetes, minikube, and kubectl. I've added them to the path and removed them a few times in different ways because I thought my configuration or usage could have been incorrect.
Then I read that if I'm using the Docker GUI, Kubernetes should be running in it, so I checked the settings in the Docker GUI to ensure Kubernetes was running through Docker, and it is.
I have Hyper-V set up. I've used it in the past successfully with Docker and with Virtualbox, so I know my Hyper-V is not the issue.
I've also attached an image of my file directory, but I'm pretty sure everything is good to go here too.
[screenshot: src tree]
Thanks in advance!
Enable Kubernetes!
The reason you are getting this error is that Kubernetes is not enabled.
Posting @Jim's solution from the comments as community wiki for better visibility:
The problem was, I had two different contexts inside of my kubectl config and the project I was trying to launch was using the wrong cluster/context. I don't know how the minikube cluster and context were created, but I deleted them and set the new context to docker-desktop with "kubectl config use-context docker-desktop".
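The commands involved look like this (context names taken from the quote above):

kubectl config get-contexts                  # list contexts; the current one is starred
kubectl config use-context docker-desktop    # switch to the docker-desktop cluster
kubectl config delete-context minikube       # optionally drop the stale minikube entry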
Helpful links:
Organizing Cluster Access Using kubeconfig Files
Configure Access to Multiple Clusters
I am trying to build CI/CD locally with Jenkins and minikube.
I run minikube on my machine (the host) with the docker driver, and run Jenkins in a container too.
Both are on the same docker network.
To run kubectl commands inside a Jenkins pipeline, I need to access minikube from the container that is running Jenkins.
I've tried to use the container name as a host, but it didn't work.
I'm out of ideas; can someone help me?
Ran into the same issue: cannot access $(minikube ip) from an external docker container, while access from the host machine is fine.
Running the docker container with the --network host option solved the issue.
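Applied to the Jenkins container from the question, that would look something like this (the image name is an assumption; use whatever image you run Jenkins from):

docker run -d --network host --name jenkins jenkins/jenkins:lts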
Running kubectl commands from a pod (container) is possible and simple to achieve, although it's more practical and recommended to use the Kubernetes API instead.
Either way, you need to give the right permissions to your pods so they can authenticate and make k8s API calls (kubectl is just an application that talks to your cluster through the API).
Here is a good example by mster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-101
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k8s-101
  template:
    metadata:
      labels:
        app: k8s-101
    spec:
      serviceAccountName: k8s-101-role
      containers:
      - name: k8s-101
        imagePullPolicy: Always
        image: yourrepo/yourcontainer
        ports:
        - name: app
          containerPort: 3000
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-101-role
subjects:
- kind: ServiceAccount
  name: k8s-101-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-101-role
Here we are giving cluster-admin rights to the Deployment's pods. Consider it a bad example: it's dangerous, as it exposes your whole cluster.
Next you have to prepare your containers to have kubectl built in (see the Dockerfile sketch below):
- Download & build kubectl inside the container
- Build your application, copying kubectl into your container
Voila! kubectl provides a rich CLI for managing your Kubernetes cluster.
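A minimal sketch of the download step, assuming a Debian-based image (the base image and kubectl version are borrowed from earlier in this thread and are assumptions):

FROM node:12.16.1
# Fetch a static kubectl binary and put it on the PATH
RUN curl -LO https://dl.k8s.io/release/v1.18.3/bin/linux/amd64/kubectl \
    && chmod +x kubectl \
    && mv kubectl /usr/local/bin/kubectl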
If you prefer to talk directly to the API, you don't need to do anything else. Just go to the documentation to understand how to make calls, and also check Access Clusters Using the Kubernetes API.
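From inside a pod, a direct call to the API server can be as small as this sketch, which uses the service-account token that Kubernetes mounts into every pod:

# Token and CA certificate are mounted into every pod by default
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# List pods in the default namespace via the in-cluster API endpoint
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods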
Here's an image of my Kubernetes services.
Todo-front-2 is a working instance of my app, which I deployed from the command line:
kubectl run todo-front --image=todo-front:v7 --image-pull-policy=Never
kubectl expose deployment todo-front --type=NodePort --port=3000
And it's working great. Now I want to move on and use a todo-front.yaml file to deploy and expose my service. The todo-front service is my current attempt at it. My deployment file looks like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: todo-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todo-front
  template:
    metadata:
      labels:
        app: todo-front
    spec:
      containers:
        - name: todo-front
          image: todo-front:v7
          env:
            - name: REACT_APP_API_ROOT
              value: "http://localhost:12000"
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: todo-front
spec:
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    app: todo-front
I deploy it using:
kubectl apply -f deployment/todo-front.yaml
Here is the output
But when I run
minikube service todo-front
It redirects me to a URL saying "Site can't be reached".
I can't figure out what I'm doing wrong. The ports should be OK, and my cluster should be OK, since I can get it working using only the command line without external YAML files. Both deployments also use the same docker image. I have also tried changing all the ports from 3000 to something different, in case they clash with the existing todo-front-2 deployment; no luck.
Here is also a screenshot of pods and their status:
Anyone with more experience with Kube and Docker cares to take a look? Thank you!
You can run the commands below to generate the YAML files without applying them to the cluster, and then compare them with the YAML you created manually to see if there is a mismatch. Also, instead of creating the YAML manually yourself, you can just apply the generated YAML.
kubectl run todo-front --image=todo-front:v7 --image-pull-policy=Never --dry-run -o yaml > todo-front-deployment.yaml
kubectl expose deployment todo-front --type=NodePort --port=3000 --dry-run -o yaml > todo-front-service.yaml
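Since the working objects already exist in the cluster, you can also ask the API server directly what would change if you applied your hand-written manifest (kubectl diff is a standard kubectl subcommand):

kubectl diff -f deployment/todo-front.yaml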