Kubernetes: Error from server (NotFound): deployments.apps "kube-verify" not found - docker

I set up a Kubernetes cluster in my private network and managed to deploy a test pod.
Now I want to expose an external IP for the service, but when I run:
kubectl get deployments kube-verify
I get:
Error from server (NotFound): deployments.apps "kube-verify" not found
EDIT
OK, I tried a new approach:
I have made a namespace called verify-cluster.
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: verify-cluster
  namespace: verify-cluster
  labels:
    app: verify-cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: verify-cluster
  template:
    metadata:
      labels:
        app: verify-cluster
    spec:
      containers:
      - name: nginx
        image: nginx:1.18.0
        ports:
        - containerPort: 80
and service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: verify-cluster
  namespace: verify-cluster
spec:
  type: NodePort
  selector:
    app: verify-cluster
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30007
then I run:
kubectl create -f deployment.yaml
kubectl create -f service.yaml
then check with:
kubectl get all -n verify-cluster
but then when I want to check the deployment with:
kubectl get all -n verify-cluster
and get:
Error from server (NotFound): deployments.apps "verify-cluster" not found
I hope that's better for reproduction.
EDIT 2
When I deploy it to the default namespace it runs directly, so the issue must be something with the namespace.
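A quick way to narrow this down is to confirm the namespace actually exists and to see which namespace the Deployment landed in:

# check that the namespace exists (no typos)
kubectl get namespaces
# list deployments in every namespace to see where the object actually went
kubectl get deployments --all-namespaces
# or query the intended namespace explicitly
kubectl get deployments -n verify-cluster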

I guess that you might have forgotten to create the namespace:
File my-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <insert-namespace-name-here>
Then:
kubectl create -f ./my-namespace.yaml

First you need to get the deployment with:
$ kubectl get deployment
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
test-deployment   1/1     1            1           15m
If you have used a namespace, then:
$ kubectl get deployment -n your-namespace
Then use the exact name in further commands, for example:
kubectl scale deployment test-deployment --replicas=10
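If the deployment lives in a namespace, the same -n flag applies to the follow-up commands as well, for example:

kubectl scale deployment test-deployment --replicas=10 -n your-namespace
kubectl get deployment test-deployment -n your-namespace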

I have replicated the use case with the config files you have provided. Everything works well at my end. Make sure the namespace is created correctly, without any typo errors.
Alternatively, you can create the namespace using the command below:
kubectl create namespace <insert-namespace-name-here>
Refer to this documentation for detailed information on creating a namespace.

Another approach could be to apply your configuration directly to the requested namespace.
kubectl apply -f deployment.yaml -n verify-cluster
kubectl apply -f service.yaml -n verify-cluster
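If you'd rather not pass -n on every command, you can also point your current kubectl context at the namespace (assuming the namespace already exists):

# make verify-cluster the default namespace for the current context
kubectl config set-context --current --namespace=verify-cluster
# subsequent commands then target that namespace by default
kubectl get deployments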

Related

Cannot pull a private package/image from GitHub Container Registry into Okteto Kubernetes

I hope it's ok to ask for your advice.
The problem in a nutshell: my pipeline cannot pull private images from GHCR.IO into Okteto Kubernetes, but public images from the same private repo work.
I'm on Windows 10 and use WSL2-Ubuntu 20.04 LTS with kinD for development and tried minikube too.
I get an error in Okteto which says that the image pull is “unauthorized” -> “imagePullBackOff”.
Things I did: browsed Stack Overflow, read the manuals and the Okteto FAQ, downloaded the Okteto kubeconfig, pulled my hair out, and spent more hours than I would like to admit – still no success yet.
For whatever reason I cannot create a "kubectl secret" that works. When logged in to ghcr.io via "docker login --username" I can pull private images locally.
No matter what I’ve tried I still get the error “unauthorized” when trying to pull a private image in Okteto.
My Setup with latest updates:
Windows 10 Pro
JetBrains Rider IDE
WSL2-Ubuntu 20.04 LTS
ASP.NET Core MVC app
.NET 6 SDK
Docker
kinD
minikube
Chocolatey
Homebrew
Setup kinD
kind create cluster --name my-name
kubectl create namespace my-namespace
// create a secret to pull images from ghcr.io
kubectl create secret docker-registry my-secret -n my-namespace --docker-username="my-username" --docker-password="my-password" --docker-email="my-email" --docker-server="https://ghcr.io"
// patch local service account
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-secret"}]}'
kubernetes.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: okteto-repo
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: okteto-repo
  template:
    metadata:
      labels:
        app: okteto-repo
    spec:
      containers:
      - name: okteto-repo
        image: ghcr.io/user/okteto-repo:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: my-secret
---
apiVersion: v1
kind: Service
metadata:
  name: okteto-repo
  annotations:
    dev.okteto.com/auto-ingress: "true"
spec:
  type: ClusterIP
  selector:
    app: okteto-repo
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
Do you have an idea why it doesn't work and what I could do?
Thanks a lot my dear friends, every input is highly appreciated!
Hope you guys have great holidays.
Cheers,
Michael
I was able to pull a private image by doing the following:
1. Created a personal token in GitHub with repo access.
2. Built and pushed the image to GitHub's Container registry (I used okteto build -t ghcr.io/rberrelleza/go-getting-started:0.0.1).
3. Downloaded my kubeconfig credentials from Okteto Cloud by running okteto context update-kubeconfig.
4. Created a secret with my credentials: kubectl create secret docker-registry gh-regcred --docker-server=ghcr.io --docker-username=rberrelleza --docker-password=ghp_XXXXXX
5. Patched the default service account to include the secret as an image pull secret: kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gh-regcred"}]}'
6. Updated the image name in the Kubernetes manifest.
7. Created the deployment (kubectl apply -f k8s.yaml).
This is what my Kubernetes resources look like, in case it helps:
# k8s.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: ghcr.io/rberrelleza/go-getting-started:0.0.1
        name: hello-world
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    dev.okteto.com/auto-ingress: "true"
spec:
  type: ClusterIP
  ports:
  - name: "hello-world"
    port: 8080
  selector:
    app: hello-world
# default SA
apiVersion: v1
imagePullSecrets:
- name: gh-regcred
- name: okteto-regcred
kind: ServiceAccount
metadata:
  creationTimestamp: "2021-05-21T22:26:38Z"
  name: default
  namespace: rberrelleza
  resourceVersion: "405042662"
  uid: 2b6a6eef-2ce7-40d3-841a-c0a5497279f7
secrets:
- name: default-token-7tm42
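If the pull still fails, it is worth checking that the secret and the patched ServiceAccount live in the same namespace as the Deployment, and reading the pod events. A rough sketch, assuming the resources went into the rberrelleza namespace shown in the ServiceAccount above:

# confirm the pull secret exists in the deployment's namespace
kubectl get secret gh-regcred -n rberrelleza
# confirm the default ServiceAccount actually lists it under imagePullSecrets
kubectl get serviceaccount default -n rberrelleza -o yaml
# the pod events usually show the exact registry error behind ImagePullBackOff
kubectl describe pod -l app=hello-world -n rberrelleza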

Minikube services work when run from command line, but applying through YAML doesn't work

Here's an image of my Kubernetes services.
Todo-front-2 is a working instance of my app, which I deployed from the command line:
kubectl run todo-front --image=todo-front:v7 --image-pull-policy=Never
kubectl expose deployment todo-front --type=NodePort --port=3000
And it's working great. Now I want to move on and use a todo-front.yaml file to deploy and expose my service. The todo-front service refers to my current attempt at it. My deployment file looks like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: todo-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: todo-front
  template:
    metadata:
      labels:
        app: todo-front
    spec:
      containers:
      - name: todo-front
        image: todo-front:v7
        env:
        - name: REACT_APP_API_ROOT
          value: "http://localhost:12000"
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: todo-front
spec:
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: todo-front
I deploy it using:
kubectl apply -f deployment/todo-front.yaml
Here is the output
But when I run
minikube service todo-front
It redirects me to a URL saying "Site can't be reached".
I can't figure out what I'm doing wrong. The ports should be OK, and my cluster should be OK, since I can get it working by using only the command line without external YAML files. Both deployments are also using the same Docker image. I have also tried changing all the ports currently set to "3000" to something different, in case they clash with the existing deployment todo-front-2, but no luck.
Here is also a screenshot of pods and their status:
Would anyone with more experience with Kube and Docker care to take a look? Thank you!
You can run the commands below to generate the YAML files without applying them to the cluster, and then compare them with the YAMLs you created manually to see if there is a mismatch. Also, instead of writing the YAMLs by hand, you can apply the generated YAMLs themselves.
kubectl run todo-front --image=todo-front:v7 --image-pull-policy=Never --dry-run -o yaml > todo-front-deployment.yaml
kubectl expose deployment todo-front --type=NodePort --port=3000 --dry-run -o yaml > todo-front-service.yaml
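For the comparison itself, a plain diff of the generated files against your hand-written manifest is usually enough to spot a mismatched label, selector, or port, and kubectl diff can compare your manifest against what is actually live in the cluster (file names here are just the ones used above):

# compare your hand-written manifest against the live cluster state
kubectl diff -f deployment/todo-front.yaml
# or compare the generated files against your manifest
diff todo-front-deployment.yaml deployment/todo-front.yaml
diff todo-front-service.yaml deployment/todo-front.yaml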

Does Kubernetes kubectl run with an image create a deployment YAML file?

I am trying to use Minikube and Docker to understand the concepts of Kubernetes architecture.
I created a spring boot application with Dockerfile, created tag and pushed to Dockerhub.
In order to deploy the image in the K8s cluster, I issued the commands below:
# deployed the image
$ kubectl run <deployment-name> --image=<username/imagename>:<version> --port=<port the app runs>
# exposed the port as nodeport
$ kubectl expose deployment <deployment-name> --type=NodePort
Everything worked and I am able to see the 1 pod running with kubectl get pods.
The Docker image I pushed to Dockerhub didn't have any deployment YAML file.
The command below produced a YAML output.
Does the kubectl command create a deployment YAML file out of the box?
$ kubectl get deployments --output yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: "2019-12-24T14:59:14Z"
    generation: 1
    labels:
      run: hello-service
    name: hello-service
    namespace: default
    resourceVersion: "76195"
    selfLink: /apis/apps/v1/namespaces/default/deployments/hello-service
    uid: 90950172-1c0b-4b9f-a339-b47569366f4e
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        run: hello-service
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          run: hello-service
      spec:
        containers:
        - image: thirumurthi/hello-service:0.0.1
          imagePullPolicy: IfNotPresent
          name: hello-service
          ports:
          - containerPort: 8800
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    availableReplicas: 1
    conditions:
    - lastTransitionTime: "2019-12-24T14:59:19Z"
      lastUpdateTime: "2019-12-24T14:59:19Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: "2019-12-24T14:59:14Z"
      lastUpdateTime: "2019-12-24T14:59:19Z"
      message: ReplicaSet "hello-service-75d67cc857" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    observedGeneration: 1
    readyReplicas: 1
    replicas: 1
    updatedReplicas: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
I think the easiest way to understand what's going on under the hood when you create Kubernetes resources using imperative commands (versus the declarative approach of writing and applying YAML definition files) is to run a simple example with 2 additional flags:
--dry-run
and
--output yaml
The names of these flags are rather self-explanatory, so I think there is no further need for comments explaining what they do. You can simply try out the examples below and you'll see the effect:
kubectl run nginx-example --image=nginx:latest --port=80 --dry-run --output yaml
As you can see, it produces the appropriate YAML manifest without applying it and creating an actual Deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx-example
  name: nginx-example
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-example
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx-example
    spec:
      containers:
      - image: nginx:latest
        name: nginx-example
        ports:
        - containerPort: 80
        resources: {}
status: {}
Same with the expose command:
kubectl expose deployment nginx-example --type=NodePort --dry-run --output yaml
produces the following output:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: nginx-example
  name: nginx-example
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-example
  type: NodePort
status:
  loadBalancer: {}
And now the coolest part. You can use simple output redirection:
kubectl run nginx-example --image=nginx:latest --port=80 --dry-run --output yaml > nginx-example-deployment.yaml
kubectl expose deployment nginx-example --type=NodePort --dry-run --output yaml > nginx-example-nodeport-service.yaml
to save the generated Deployment and NodePort Service definitions so you can further modify them if needed and apply them using either kubectl apply -f filename.yaml or kubectl create -f filename.yaml.
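For example, a minimal round trip with the file names used above might look like this:

kubectl apply -f nginx-example-deployment.yaml
kubectl apply -f nginx-example-nodeport-service.yaml
kubectl get deployment,service nginx-example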
By the way, kubectl run and kubectl expose are generator-based commands, and as you may have noticed when creating your deployment (you probably got the message: kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.), they use the --generator flag. If you don't specify it explicitly, it gets the default value, which for kubectl run is --generator=deployment/apps.v1beta1, so by default it creates a Deployment. But you can modify it by providing --generator=run-pod/v1 nginx-example, and instead of a Deployment it will create a single Pod. Going back to our previous example, it may look like this:
kubectl run --generator=run-pod/v1 nginx-example --image=nginx:latest --port=80 --dry-run --output yaml
I hope this answered your question and clarified a bit the mechanism of creating Kubernetes resources using imperative commands.
Yes, kubectl run creates a deployment. If you look at the label field, you can see run: hello-service. This label is used later in the selector.
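To see that relationship in practice, you can filter by the label yourself (a small illustrative check using the names from the output above):

# pods created by that Deployment carry the run=hello-service label
kubectl get pods -l run=hello-service
# and the Deployment's selector references the same label
kubectl get deployment hello-service -o jsonpath='{.spec.selector.matchLabels}'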

How to create a Kubernetes ingress to work with SparkJava port 4567?

I have created a Java-based web service which utilizes SparkJava. By default this web service binds to and listens on port 4567. My company requested this be placed in a Docker container. I created a Dockerfile and built the image, and when I run it I expose port 4567...
docker run -d -p 4567:4567 -t myservice
I can invoke my web service for testing by calling a curl command...
curl -i -X "POST" -H "Content-Type: application/json" -d "{}" "http://localhost:4567/myservice"
... and this is working. My company then said it wants to put this in Amazon EKS Kubernetes, so I published my Docker image to the company's private Dockerhub. I created three YAML files...
deployment.yaml
service.yaml
ingress.yaml
I see my objects are created, and I can get a /bin/bash command line into my container running in Kubernetes. From there I can test that localhost access to my service is working correctly, including references to external web service resources, so I know my service is good.
I am confused by the ingress. I need to expose a URI to get to my service and I am not sure how this is supposed to work. Many examples show using NGINX, but I am not using NGINX.
Here are my files and what I have tested so far. Any guidance is appreciated.
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: my-api-service
spec:
  selector:
    app: my-api
  ports:
  - name: main
    protocol: TCP
    port: 4567
    targetPort: 4567
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-api-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api-container
        image: hub.mycompany.net/myproject/my-api-service
        ports:
        - containerPort: 4567
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
spec:
  backend:
    serviceName: my-api-service
    servicePort: 4567
when I run the command ...
kubectl get ingress my-api-ingress
... shows ...
NAME             HOSTS   ADDRESS   PORTS   AGE
my-api-ingress   *                 80      9s
when I run the command ...
kubectl get service my-api-service
... shows ...
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
my-api-service   ClusterIP   172.20.247.225   <none>        4567/TCP   16h
When I run the following command...
kubectl cluster-info
... I see ...
Kubernetes master is running at https://12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com
As such I try to hit the endpoint using curl by issuing...
curl -i -X "POST" -H "Content-Type: application/json" -d "{}" "http://12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com:4567/myservice"
After some time I receive a time-out error...
curl: (7) Failed to connect to 12CA0954AB5F8E1C52C3DD42A3DBE645.yl4.us-east-1.eks.amazonaws.com port 4567: Operation timed out
I believe my ingress is at fault but I am having difficulties finding non-NGINX examples to compare.
Thoughts?
barrypicker.
Your service should be "type: NodePort"
This example is very similar (however, it was tested in GKE).
kind: Service
apiVersion: v1
metadata:
  name: my-api-service
spec:
  selector:
    app: my-api
  ports:
  - name: main
    protocol: TCP
    port: 4567
    targetPort: 4567
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api-container
        image: hashicorp/http-echo:0.2.1
        args: ["-listen=:4567", "-text=Hello api"]
        ports:
        - containerPort: 4567
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-api-ingress
spec:
  backend:
    serviceName: my-api-service
    servicePort: 4567
In your ingress (kubectl get ingress <your-ingress>) you should see an external IP address.
You can find the AWS-specific implementation here. In addition, you can find more information about exposing services here.
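As a quick sanity check while the ingress ADDRESS is still empty, you can hit the NodePort directly (a rough sketch; the node must be reachable from where you run curl and its security group must allow the port):

# find the allocated NodePort
kubectl get service my-api-service -o jsonpath='{.spec.ports[0].nodePort}'
# find a node address
kubectl get nodes -o wide
# then call the service through any node
curl -i -X "POST" -H "Content-Type: application/json" -d "{}" "http://<node-ip>:<node-port>/myservice"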

Getting error status trying to upload my kubernetes pod

I have my controller.yaml that looks like this:
apiVersion: v1
kind: ReplicationController
metadata:
  name: hmrcaction
  labels:
    name: hmrcaction
spec:
  replicas: 1
  selector:
    name: hmrcaction
  template:
    metadata:
      labels:
        name: hmrcaction
        version: 0.1.4
    spec:
      containers:
      - name: hmrcaction
        image: ccc-docker-docker-release.someartifactory.com/hmrcaction:0.1.4
        ports:
        - containerPort: 9000
      imagePullSecrets:
      - name: fff-artifactory
and a service.yaml that looks like this:
apiVersion: v1
kind: Service
metadata:
  name: hmrcaction
  labels:
    name: hmrcaction
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 9000
  selector:
    name: hmrcaction
and I have a Kubernetes cluster, so I wanted to use this RC to deploy my Docker image to the cluster, which I did like this:
kubectl create -f controller.yaml
but I get some weird status; when I run the command kubectl get pods I get:
NAME               READY   STATUS             RESTARTS   AGE
hmrcaction-k9bb6   0/1     ImagePullBackOff   0          40s
What is this? Before, the status was ErrImagePull...
Please help :)
Thanks!
kubectl describe pods -l name=hmrcaction should give you more useful information.
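ImagePullBackOff generally means the node could not authenticate to the registry or could not find the image, so besides describe it can help to check the pull secret referenced by the RC (a generic sketch using the names from the manifests above):

# the Events section shows the exact error returned by the registry
kubectl describe pods -l name=hmrcaction
# confirm the referenced pull secret exists and is of the expected type
kubectl get secret fff-artifactory -o yaml
# verify the image reference and tag match what was actually pushed
kubectl get rc hmrcaction -o jsonpath='{.spec.template.spec.containers[0].image}'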
