Kubernetes jenkins deployment api version no matches - jenkins

The following deployment file works when I apply it from my local machine:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: api
  namespace: app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: api
  template:
    metadata:
      labels:
        run: api
    spec:
      containers:
      - name: api
        image: gcr.io/myproject/api:1535462260754
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /_ah/health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
The same file is on the remote Compute Engine machine which runs Jenkins. On that machine, over ssh, I am also able to apply this config. But when it is executed from a Jenkins shell build step, it always throws
error: unable to recognize "./dist/cluster/api.deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
I tried changing apiVersion to apps/v1beta1 and to extensions/v1beta1 as well.
I don't know what else to try.
Update 1
kubectl version on Compute Engine:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.7-gke.5", GitCommit:"9b635efce81582e1da13b35a7aa539c0ccb32987", GitTreeState:"clean", BuildDate:"2018-08-02T23:42:40Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
Update 2
Running it inside the Jenkins job showed this.
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Error from server (Forbidden): <html><head><meta http-equiv='refresh' content='1;url=/securityRealm/commenceLogin?from=%2Fversion%3Ftimeout%3D32s'/><script>window.location.replace('/securityRealm/commenceLogin?from=%2Fversion%3Ftimeout%3D32s');</script></head><body style='background-color:white; color:white;'>
Authentication required
<!--
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
... which is implied by: hudson.security.Permission.GenericRead
... which is implied by: hudson.model.Hudson.Administer
-->
</body></html>

Probably the kubectl version on your Jenkins server or agent is old. Try running kubectl version from inside the Jenkins job to check for a mismatch.
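For example, a minimal check that could be dropped into the job's shell step (a sketch, assuming the job runs plain shell commands):
# Show which kubectl binary and which cluster the Jenkins job actually uses
which kubectl
kubectl version --short          # compare Client Version vs Server Version
kubectl config current-context   # confirm the job is talking to the expected cluster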

Thanks to #csanchez I figured out that I needed to get credentials as the jenkins user. For that I just ran this command:
gcloud container clusters get-credentials cluster-1 --zone=my-cluster-zone --project myproject
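If Jenkins runs its jobs as a dedicated jenkins user, the credentials have to end up in that user's home directory; a sketch of how that could be done from an ssh session (the jenkins user name is an assumption):
# Fetch cluster credentials into the jenkins user's ~/.kube/config (user name assumed to be "jenkins")
sudo -u jenkins -H gcloud container clusters get-credentials cluster-1 --zone=my-cluster-zone --project myproject
# Verify that the jenkins user can reach the cluster and that apps/v1 is served
sudo -u jenkins -H kubectl api-versions | grep ^apps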

Related

AWS EKS getting error "networkPlugin cni failed to set up pod"

EKS cluster version:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-19T11:45:27Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Given below is my deployment file:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: sample-pod
  namespace: front-end
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-pod
  template:
    metadata:
      labels:
        app: sample-pod
    spec:
      serviceAccountName: my-service-account
      containers:
      - name: sample-pod
        image: <Account-id>.dkr.ecr.us-east-1.amazonaws.com/sample-pod-image:latest
        resources:
          limits:
            cpu: 1000m
            memory: 1000Mi
          requests:
            cpu: 500m
            memory: 500Mi
        env:
        - name: name
          value: sample-pod
        - name: ACTIVE_SPRING_PROFILE
          value: dev
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 8091
      imagePullSecrets:
      - name: <my_region>-1-ecr-registry
And this is my Dockerfile:
FROM amazoncorretto:latest
COPY bootstarp.sh /bootstarp.sh
RUN yum -y install aws-cli
CMD ["tail", "-f" , "/bootstarp.sh"]
Steps to reproduce:
kubectl apply -f my-dep.yaml
Let the container be created.
Delete the deployment using the command
kubectl delete -f my-dep.yaml
Recreate it using the command
kubectl apply -f my-dep.yaml
Not a perfect solution, but this is how I overcame it.
Root cause: the deployment was in the Terminating stage and I was recreating it, which involves the reassignment of networking resources, and due to a deadlock the deployment fails.
Solution: I added a cool-down period between terminating and recreating the deployment. Earlier I was deleting and recreating the deployment in one shot (using a shell script).
Earlier:
kubectl delete -f my-dep.yaml
some more instructions .....
kubectl apply -f my-dep.yaml
Now:
kubectl delete -f my-dep.yaml
some more instructions .....
sleep 1m 30s
kubectl apply -f my-dep.yaml
Because of the cool down, I can now predictably deploy the container.
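An alternative to a fixed sleep (not from the original answer, just a sketch assuming the labels and namespace from the manifest above) is to wait until the old pods have actually disappeared before recreating:
kubectl delete -f my-dep.yaml --wait=true
# Block until the old pods are really gone; tolerate "no resources found" if deletion already finished
kubectl wait --for=delete pod -l app=sample-pod -n front-end --timeout=180s || true
kubectl apply -f my-dep.yaml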
Regards
Amit Meena

Failed to download OpenAPI error with Kubernetes deployment

I'm learning Kubernetes and I've just installed minikube on my mac.
I have a docker image that I'd like to deploy. I created a deployment yaml file which looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonarqube
spec:
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
      - image: docker-sonarqube-developer:latest
        args:
        - -Dsonar.web.context=/
        name: sonarqube
        env:
        - name: SONARQUBE_JDBC_USERNAME
          value: sonarqube
        - name: SONARQUBE_JDBC_PASSWORD
          value: sonarqube
        ports:
        - containerPort: 9000
          name: sonarqube
I am trying to deploy my docker image on minikube with the following command:
kubectl create -f deployment.yaml
But I'm getting an error and I'm not sure what's going on.
W0628 09:18:45.550812 64359 factory_object_mapping.go:423] Failed to download OpenAPI (the server could not find the requested resource), falling back to swagger
error: error validating "k8s/deployment.yaml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
Minikube is running and I can access the dashboard.
❯ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 17h v1.15.0
The docker image is available locally
❯ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
6fcfdad92d16 docker-sonarqube-developer "./bin/run.sh" 16 hours
Any idea what's wrong?
Thanks!
First, check the kubectl version.
Check whether the Minor version for both the client and the server is the same:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.20.2",
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0",
If not, then follow the steps below:
$curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
$chmod +x ./kubectl
$sudo mv ./kubectl /usr/local/bin/kubectl
Now check the version again
$kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2",
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0",
$kubectl create -f deployment.yaml
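If the versions already match and the error persists, it can also help to check which API groups the server actually serves, for example:
kubectl api-versions | grep apps             # lists apps/v1 on any recent cluster
kubectl api-resources | grep -i deployment   # shows the group/version Deployment is served from
Note that on clusters running 1.16 or later, Deployment is only served from apps/v1, so the manifest's apiVersion: extensions/v1beta1 would also need to be updated.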

Failed to use hostPath '/var/lib/docker/containers' as volume in Kubernetes

I have failed to use HostPath /var/lib/docker/containers as a volume with the following error:
Error response from daemon: linux mounts: Path /var/lib/docker/containers is
mounted on /var/lib/docker/containers but it is not a shared or slave mount.
Here is my YAML spec (note: this is just an example for reproducing my problem in doing log collection):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: logging
  labels:
    app: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        securityContext:
          privileged: true
        ports:
        - containerPort: 8003
        volumeMounts:
        - name: docker
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: docker
        hostPath:
          path: /var/lib/docker/containers
And my Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1",
GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean",
BuildDate:"2018-04-12T14:26:04Z", GoVersion:"go1.9.3", Compiler:"gc",
Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0",
GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean",
BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc",
Platform:"linux/amd64"}
Any help is very much appreciated!
You are most probably hit by a version-specific issue:
/var/lib/docker/containers is intentionally mounted by Docker with private mount
propagation and thus conflicts with Kubernetes trying to mount this directory
as rslave when running the container
You should try 1.10.3+, where it is resolved. See the official Kubernetes changelog and check the entry related to "Default mount propagation". Also check the related (see the error) fluentd issue for a more in-depth analysis.
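On 1.10.3+ you can also make the propagation explicit on the volume mount rather than relying on the default; a minimal sketch of the relevant fragment (mountPropagation is a standard VolumeMount field):
        volumeMounts:
        - name: docker
          mountPath: /var/lib/docker/containers
          readOnly: true
          mountPropagation: HostToContainer  # receive mounts propagated from the host (rslave), read-only view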
Now, with that said...
David's comment, questioning this and urging caution, still stands and I second it: this is quite an eyebrow-raiser, an nginx pod digging deep into Docker engine internals (hopefully just for the sake of a minimal reproducible example, or a log-collection case, you know, something...). Just make sure you know exactly what you are doing and why.

Kubernetes pull from insecure docker registry

I am stuck at this phase:
Have a local insecure Docker registry with some images in it, e.g. 192.168.1.161:5000/kafka:latest
Have a Kubernetes cloud cluster, which I can access only via the ~/.kube/config file, e.g. a token.
Need to deploy the deployment below, but Kubernetes cannot pull images; the error message is:
Failed to pull image "192.168.1.161:5000/kafka:latest": rpc error:
code = Unknown desc = Error response from daemon: Get
https://192.168.1.161:5000/v2/: http: server gave HTTP response to
HTTPS client
apiVersion: v1
kind: Service
metadata:
  name: kafka
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
  - name: port9094
    port: 9094
    targetPort: 9094
  selector:
    app: kafka
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kafka
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: kafka
      containers:
      - name: redis
        image: 192.168.1.161:5000/kafka:latest
        imagePullPolicy: Always
        ports:
        - name: port9094
          containerPort: 9094
        envFrom:
        - configMapRef:
            name: env
      imagePullSecrets:
      - name: regsec
On the Kubernetes cluster I have created the secret "regsec" with this command:
kubectl create secret docker-registry regsec --docker-server=192.168.1.161 --docker-username=<name from config file> --docker-password=<token value from config file>
cat ~/.docker/config.json
{
    "auths": {},
    "HttpHeaders": {
        "User-Agent": "Docker-Client/18.06.0-ce (linux)"
    }
}
cat /etc/docker/daemon.json
{
"insecure-registries":["192.168.1.161:5000"]
}
kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
docker version
Client:
Version: 18.06.0-ce
API version: 1.38
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:09:54 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.0-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:07:56 2018
OS/Arch: linux/amd64
Experimental: false
You need to go to each of your nodes, edit the Docker daemon configuration file /etc/docker/daemon.json (the same file you already edited locally) and add the following to it:
{
    "insecure-registries": ["192.168.1.161:5000"]
}
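After editing the file, the Docker daemon on that node has to be restarted to pick up the change; a sketch assuming a systemd-based node:
sudo systemctl restart docker
docker info | grep -A2 'Insecure Registries'   # should now list 192.168.1.161:5000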
I used minikube for my Kubernetes cluster.
When I tried to apply a Pod with an image from my private docker registry (that is local, without authentication), the Pod didn't run and describe had a message indicating the repository wasn't reached (paraphrasing).
To fix this, I had to configure insecure-registry for the Docker daemon. According to the Docker docs, this can be done in two ways: as a flag passed to the dockerd command, or by modifying /etc/docker/daemon.json (on Linux).
However, as I used minikube to create and configure the cluster and daemon, I instead followed the minikube docs to set the flag --insecure-registry. The complete command is:
minikube start --insecure-registry "DOMAIN_DOCKER_REGISTRY:PORT_DOCKER_REGISTRY"
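For the registry from this question that would look like the following (note that the flag only takes effect on a freshly created minikube machine, so an existing one may need to be deleted first):
minikube delete
minikube start --insecure-registry "192.168.1.161:5000"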
I have come to this thread over and over again trying to find the correct answer for getting rid of certificate issues, without much success.
I finally solved the problem by installing the self-signed root certificate on the system on all of the Kubernetes machines. On Ubuntu, you can import it via:
sudo mv internal-ca.cert /usr/local/share/ca-certificates/internal-ca.crt
sudo update-ca-certificates
Keep in mind that if you have a certificate chain, it will require the root certificate, not the intermediate certificate. You can check whether the import worked by running:
openssl s_client -connect <YOUR REGISTRY HERE> -showcerts < /dev/null
You should see something like:
CONNECTED(00000005)
as the response.

How can I access from network newly deployed pod in kubernetes?

I am a newbie in Kubernetes and I know I am missing something small but cannot see what.
I am creating a pod with the file (kubectl create -f mysql.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
  - resources:
      limits:
        cpu: 2
    image: mysql
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      # change this
      value: TestingDB1
    ports:
    - containerPort: 3306
      name: mysql
and a service with: kubectl create -f mysql_service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mysql
  name: mysql
spec:
  externalIPs:
  - 10.19.13.127
  ports:
  - port: 3306
  selector:
    name: mysql
Output of "kubectl version"
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"d33fd89e399396658aed4e48dfe7d5d8d50ac6e8", GitTreeState:"clean", BuildDate:"2017-05-26T17:08:24Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"d33fd89e399396658aed4e48dfe7d5d8d50ac6e8", GitTreeState:"clean", BuildDate:"2017-05-26T17:08:24Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Output of "kubectl cluster-info"
Kubernetes master is running at http://localhost:8080
Output of "kubectl get pods"
NAME READY STATUS RESTARTS AGE
mysql 1/1 Running 0 20m
Output of "kubectl get svc"
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 18h
mysql 10.254.129.206 10.19.13.127 3306/TCP 1h
Output of "kubectl get no"
NAME STATUS AGE
10.19.13.127 Ready 19h
Output of "docker ps"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74ea1fb2b383 mysql "docker-entrypoint.sh" 3 minutes ago Up 3 minutes k8s_mysql.ae7893ad_mysql_default_e58d1c09-4a8e-11e7-9baf-fa163ee3f5d9_793d8d7c
I can see the pod is being created normally. When I connect to the container I am even able to log in to mysql with the credentials.
My question is:
How can I access/expose the port running on my Kubernetes node from my network? For example, I want to telnet from my PC to the Kubernetes node where the mysql pod is running.
Thank you!
The command below shows that the server in the pod is running and which port it is listening on (this example uses a Redis pod, which generally listens on port 6379):
kubectl get pods redis-master --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
Output: 6379
The following command gives you the port a pod is listening on, so you can create a route or port forwarding to access the service:
kubectl get pod <pod_name> -o "go-template={{(index (index .spec.containers 0).ports 0).containerPort}}"
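To actually reach the pod from your PC, you can then either tunnel through the API server or expose the port on the node; a sketch using the mysql pod and service from the question:
# Option 1: temporary tunnel from your machine to the pod (runs until interrupted)
kubectl port-forward mysql 3306:3306
# Option 2: switch the existing mysql service to NodePort and connect to <node-ip>:<assigned-node-port>
kubectl patch svc mysql -p '{"spec":{"type":"NodePort"}}'
kubectl get svc mysql   # shows the assigned node port, e.g. 3306:3xxxx/TCP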
