jenkins ansible-plugin can't find ansible role - jenkins

Hope you guys are doing great.
I have a problem running a playbook with the Ansible plugin in Jenkins. When I run the build it gives me this error:
[ansible-demo] $ /usr/bin/ansible-playbook /var/lib/jenkins/workspace/ansible-demo/ansible-openshift.yaml -f 5
ERROR! the role 'ansible.kubernetes-modules' was not found in /var/lib/jenkins/workspace/ansible-demo/roles:/var/lib/jenkins/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/var/lib/jenkins/workspace/ansible-demo
The error appears to be in '/var/lib/jenkins/workspace/ansible-demo/ansible-openshift.yaml': line 6, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
roles:
- role: ansible.kubernetes-modules
^ here
FATAL: command execution failed
hudson.AbortException: Ansible playbook execution failed
at org.jenkinsci.plugins.ansible.AnsiblePlaybookBuilder.perform(AnsiblePlaybookBuilder.java:262)
at org.jenkinsci.plugins.ansible.AnsiblePlaybookBuilder.perform(AnsiblePlaybookBuilder.java:232)
at jenkins.tasks.SimpleBuildStep.perform(SimpleBuildStep.java:123)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:78)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:806)
at hudson.model.Build$BuildExecution.build(Build.java:198)
at hudson.model.Build$BuildExecution.doRun(Build.java:163)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:514)
at hudson.model.Run.execute(Run.java:1888)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:99)
at hudson.model.Executor.run(Executor.java:431)
ERROR: Ansible playbook execution failed
Finished: FAILURE
Here is the YAML file that I use to deploy to the OpenShift cluster:
---
- hosts: 127.0.0.1
  become: yes
  become_user: oassaghir
  roles:
    - role: ansible.kubernetes-modules
  vars:
    ansible_python_interpreter: /usr/bin/python3
  tasks:
    - name: Try to login to Okd cluster
      k8s_auth:
        host: https://127.0.0.1:8443
        username: developer
        password: ****
        validate_certs: no
      register: k8s_auth_result

    - name: deploy hello-world pod
      k8s:
        state: present
        apply: yes
        namespace: myproject
        host: https://127.0.0.1:8443
        api_key: "{{ k8s_auth_result.k8s_auth.api_key }}"
        validate_certs: no
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: hello-openshift
            labels:
              name: hello-openshift
          spec:
            selector:
              matchLabels:
                app: hello-openshift
            replicas: 1
            template:
              metadata:
                labels:
                  app: hello-openshift
              spec:
                containers:
                  - name: hello-openshift
                    image: openshift/hello-openshift
                    ports:
                      - containerPort: 8080
                        protocol: TCP
                    resources:
                      requests:
                        cpu: 300m
                        memory: 64Mi
                      limits:
                        cpu: 600m
                        memory: 128Mi
When I run the playbook on my machine it works, but through Jenkins it does not.
On my machine:
[oassaghir@openshift ansible-demo]$ sudo ansible-playbook ansible-openshift.yaml
PLAY [127.0.0.1] **************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************
ok: [127.0.0.1]
TASK [ansible.kubernetes-modules : Install latest openshift client] ***********************************************************************
skipping: [127.0.0.1]
TASK [Try to login to Okd cluster] ********************************************************************************************************
ok: [127.0.0.1]
TASK [deploy hello-world pod] *************************************************************************************************************
ok: [127.0.0.1]
PLAY RECAP ********************************************************************************************************************************
127.0.0.1 : ok=3 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Can someone help, please?
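One thing worth checking, given the role search path printed in the error above: the role is visible to your interactive (sudo) run but apparently not to the jenkins user. A minimal sketch, assuming the role comes from Ansible Galaxy, would be to install it into one of the directories Jenkins actually searches:

# Install the role into a system-wide path that is on the search list in the error message
sudo ansible-galaxy install ansible.kubernetes-modules -p /etc/ansible/roles

# Or install it for the jenkins user specifically
sudo -u jenkins ansible-galaxy install ansible.kubernetes-modules -p /var/lib/jenkins/.ansible/roles

# Or keep a requirements.yml in the repo and install into the workspace's roles/ directory
# as a build step before the playbook runs:
# ansible-galaxy install -r requirements.yml -p roles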

Related

Knative service deployment fails with reason RevisionMissing

I have deployed a service on Knative. I iterated on the service code/Docker image and I tried to redeploy it at the same address. I proceeded as follows:
Pushed the new Docker image to our private Docker repo
Updated the service YAML file to point to the new Docker image (see YAML below)
Deleted the service with the command: kubectl -n myspacename delete -f myservicename.yaml
Recreated the service with the command: kubectl -n myspacename apply -f myservicename.yaml
During the deployment, the service shows READY = Unknown and REASON = RevisionMissing, and after a while, READY = False and REASON = ProgressDeadlineExceeded. When looking at the logs of the pod with the following command kubectl -n myspacename logs revision.serving.knative.dev/myservicename-00001, I get the message:
no kind "Revision" is registered for version "serving.knative.dev/v1" in scheme "pkg/scheme/scheme.go:28"
Here is the YAML file of the service:
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myservicename
  namespace: myspacename
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: concurrency
        autoscaling.knative.dev/target: '1'
        autoscaling.knative.dev/minScale: '0'
        autoscaling.knative.dev/maxScale: '5'
        autoscaling.knative.dev/scaleDownDelay: 60s
        autoscaling.knative.dev/window: 600s
    spec:
      tolerations:
        - key: nvidia.com/gpu
          operator: Exists
          effect: NoSchedule
      volumes:
        - name: nfs-volume
          persistentVolumeClaim:
            claimName: myspacename-models-pvc
      imagePullSecrets:
        - name: myrobotaccount-pull-secret
      containers:
        - name: myservicename
          image: quay.company.com/project/myservicename:0.4.0
          ports:
            - containerPort: 5000
              name: user-port
              protocol: TCP
          resources:
            limits:
              cpu: "4"
              memory: 36Gi
              nvidia.com/gpu: 1
            requests:
              cpu: "2"
              memory: 32Gi
          volumeMounts:
            - name: nfs-volume
              mountPath: /tmp/static/
          securityContext:
            privileged: true
          env:
            - name: CLOUD_STORAGE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: myservicename-cloud-storage-password
                  key: key
          envFrom:
            - configMapRef:
                name: myservicename-config
The protocol I followed above is correct; the problem was caused by a bug in the code of the Docker image that Knative is serving. I was able to troubleshoot the issue by looking at the logs of the pods as follows:
First run the following command to get the pod name: kubectl -n myspacename get pods. Example of pod name = myservicename-00001-deployment-56595b764f-dl7x6
Then get the logs of the pod with the following command: kubectl -n myspacename logs myservicename-00001-deployment-56595b764f-dl7x6
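For reference, the same logs can also be reached without copying the generated pod name, by selecting on the label Knative Serving puts on revision pods. A sketch (it assumes the application container keeps the name given in the service spec, myservicename):

# List the pods behind the Knative service
kubectl -n myspacename get pods -l serving.knative.dev/service=myservicename

# Tail the application container's logs for those pods
kubectl -n myspacename logs -l serving.knative.dev/service=myservicename -c myservicename --tail=100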

download failed : x509: certificate signed by unknown authority

I just started learning Docker and Kubernetes. I installed Minikube and Docker on my Windows machine. I am able to pull the image using the docker pull command, but I get the error below with kubectl. Please help.
Warning Failed 18s (x2 over 53s) kubelet Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = error pulling image configuration: download failed after attempts=6: x509: certificate signed by unknown authority Warning Failed 18s (
This is my yml file.
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
spec:
  containers:
    name: nginx1
    image: nginx:alpine
    ports:
      containerPort: 80
      containerPort: 443
Thanks in advance.
Your YAML file looks messed up. Please ensure you write the YAML file properly. Can you test whether this YAML file works for you? I have used a different image in the YAML file below:
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  namespace: test
spec:
  containers:
    - name: webserver
      image: nginx:latest
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "64Mi"
          cpu: "200m"
        limits:
          memory: "128Mi"
          cpu: "350m"
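As a quick way to exercise that manifest (a sketch; it assumes the file is saved as nginx1.yaml and that the test namespace does not already exist):

kubectl create namespace test
kubectl apply -f nginx1.yaml
kubectl -n test get pod nginx1
# If the image pull still fails, the Events section will show the x509 details:
kubectl -n test describe pod nginx1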

skaffold not modifying image tag in custom resource yaml file

I am trying to build a sidecar image with Skaffold and then push it onto my Minikube cluster.
My skaffold.yaml file looks like this:
apiVersion: skaffold/v2beta28
kind: Config
metadata:
  name: sidecar
build:
  artifacts:
    - image: amolgautam25/sidecar
      docker:
        dockerfile: Dockerfile
deploy:
  kubectl:
    manifests:
      - pg-example.yaml
My pod deployment file (pg_example.yaml) looks like this:
apiVersion: acid.zalan.do/v1
kind: postgresql
metadata:
  name: vmw-test
spec:
  databases:
    foo: zalando
  numberOfInstances: 1
  podAnnotations:
    prometheus.io/port: "9187"
    prometheus.io/scrape: "true"
  postgresql:
    parameters:
      log_filename: postgresql.log
      log_rotation_age: "0"
      log_rotation_size: "0"
    version: "14"
  preparedDatabases:
    bar: {}
  resources:
    limits:
      cpu: 500m
      memory: 500Mi
    requests:
      cpu: "0"
      memory: "0"
  spiloFSGroup: 103
  spiloRunAsGroup: 103
  spiloRunAsUser: 101
  teamId: vmw
  users:
    foo_user: []
    zalando:
      - superuser
      - createdb
  volume:
    size: 1Gi
  sidecars:
    - name: "postgres-exporter"
      image: quay.io/prometheuscommunity/postgres-exporter
      env:
        # The default "host all all 127.0.0.1/32 md5" rule in pg_hba.conf
        # allows us to connect over 127.0.0.1 without TLS as long as we have the password
        - name: DATA_SOURCE_URI
          value: "localhost:5432/postgres?sslmode=disable"
        - name: DATA_SOURCE_USER
          value: postgres
        - name: DATA_SOURCE_PASS
          valueFrom:
            secretKeyRef:
              key: password
              name: postgres.vmw-test.credentials.postgresql.acid.zalan.do
      ports:
        - name: exporter
          containerPort: 9187
          protocol: TCP
      resources:
        requests:
          cpu: 500m
          memory: 500Mi
        limits:
          cpu: 1000m
          memory: 1Gi
    - name: "metrics-sidecar"
      image: amolgautam25/sidecar
In the Minikube image list I can see the image built by Skaffold:
<username-hidden>$ minikube image ls -p my-profile
docker.io/amolgautam25/sidecar:6b1af6a1fd25825dc63fd843f951e10c98bd9eb87d80cd8cf81da5641dc041e2
However, Minikube refuses to use that image. Here is the error I get when I describe the pod:
Normal Pulling 7s kubelet Pulling image "amolgautam25/sidecar"
Warning Failed 6s kubelet Failed to pull image "amolgautam25/sidecar": rpc error: code = Unknown desc = Error response from daemon: pull access denied for amolgautam25/sidecar, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed 6s kubelet Error: ErrImagePull
Normal BackOff 3s (x2 over 5s) kubelet Back-off pulling image "amolgautam25/sidecar"
Warning Failed 3s (x2 over 5s) kubelet Error: ImagePullBackOff
But it seems that Minikube already has that image. I have tried variations of docker.io/amolgautam25/sidecar etc., but it does not work.
Any help would be appreciated.
Edit:
On further investigation I have found out that Skaffold is not modifying the 'pg-example.yaml' file. For some reason it does not change the 'image' tag to the one built by Skaffold. I think the answer lies in https://skaffold.dev/docs/tutorials/skaffold-resource-selector/ (still investigating); a rough sketch of that idea follows.
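Based on that resource-selector tutorial, the likely fix is to tell Skaffold that image fields inside the acid.zalan.do custom resource are allowed to be rewritten. An unverified sketch of the stanza added to skaffold.yaml (the resourceSelector block only exists in newer schema versions, so the apiVersion may need to be raised and the field names double-checked against the linked docs):

# added to skaffold.yaml (requires a schema version that supports resourceSelector)
resourceSelector:
  allow:
    # let Skaffold rewrite image fields inside the postgresql custom resource
    - groupKind: postgresql.acid.zalan.do
      image: [".*"]
      labels: [".*"]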

Pod cannot find mount path on Docker Desktop(Win)

On Docker Desktop (Windows, file sharing already enabled), the PV/PVC are created and bound successfully, but starting the pod fails with:
Warning Failed 4s (x4 over 34s) kubelet, docker-desktop Error: stat /c/cannot-found: no such file or directory
I have already created the "cannot-found" folder at the root of my drive, whose path is C:\cannot-found.
Here are my pv/pvc and pod yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: global-volume
  labels:
    pv_pvc_label: nfs
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  hostPath:
    path: "/c/cannot-found"
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: global-volume
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv_pvc_label: nfs
  volumeName: global-volume
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine
  namespace: default
spec:
  selector:
    matchLabels:
      component: centos
      module: centos
  replicas: 1
  template:
    metadata:
      labels:
        component: centos
        module: centos
    spec:
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: global-volume
      containers:
        - image: centos
          command:
            - /bin/sh
            - "-c"
            - "touch /root/test.txt; echo \"test mount\">/root/test.txt; cp /root/test.txt /tmp/test.txt; sleep 60m"
          imagePullPolicy: IfNotPresent
          name: alpine
          volumeMounts:
            - name: nfs
              mountPath: /tmp
              subPath: mountPath
      restartPolicy: Always
My pv/pvc status:
leih@CNleih01 MINGW64 /c/dev/SAW/x-workspace/deployers/common (leih/workspace)
$ kubectl create -f global-volume-tmp.yaml
persistentvolume/global-volume created
persistentvolumeclaim/global-volume created
leih@CNleih01 MINGW64 /c/dev/SAW/x-workspace/deployers/common (leih/workspace)
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
global-volume Pending global-volume 0 standard 7s
leih@CNleih01 MINGW64 /c/dev/SAW/x-workspace/deployers/common (leih/workspace)
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
global-volume Bound global-volume 1Gi RWX standard 10s
leih@CNleih01 MINGW64 /c/dev/SAW/x-workspace/deployers/common (leih/workspace)
$ kubectl scale deployment alpine --replicas=1
deployment.apps/alpine scaled
leih@CNleih01 MINGW64 /c/dev/SAW/x-workspace/deployers/common (leih/workspace)
$ kubectl get po |grep alpine
alpine-6559ddcb88-n262l 0/1 CreateContainerConfigError 0 14s
leih@CNleih01 MINGW64 /c/dev/SAW/x-workspace/deployers/common (leih/workspace)
$ kubectl describe po alpine-6559ddcb88-n262l |grep Error
Reason: CreateContainerConfigError
Warning Failed 4s (x4 over 34s) kubelet, docker-desktop Error: stat /c/cannot-found: no such file or directory
File sharing:
k8s version: v1.16.6-beta.0
docker desktop version(win): Docker Desktop Community 2.3.0.2
docker version: v19.03.8
How can I solve it?
Update:
After rolling back Docker Desktop from v2.3.0.2 to v2.2.0.5, it works fine.
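If rolling back is not an option, the form of the hostPath itself is worth experimenting with, since different Docker Desktop backends expose the Windows drives under different prefixes. A sketch of the candidates to try (which one applies depends on the backend, so treat these as guesses rather than a confirmed fix):

# Hyper-V backend usually exposes C:\ as /host_mnt/c
hostPath:
  path: "/host_mnt/c/cannot-found"

# WSL 2 backend usually exposes it as /run/desktop/mnt/host/c
# hostPath:
#   path: "/run/desktop/mnt/host/c/cannot-found"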

When Using NFS volume, container not starting in Kubernetes

I am using NFS for a volume in a Kubernetes pod, created via a Deployment.
Below are the details of all the files.
Filename :- nfs-server.yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  selector:
    role: nfs
  ports:
    # Open the ports required by the NFS server
    # Port 2049 for TCP
    - name: tcp-2049
      port: 2049
      protocol: TCP
    # Port 111 for UDP
    - name: udp-111
      port: 111
      protocol: UDP
---
kind: Pod
apiVersion: v1
metadata:
  name: nfs-server-pod
  labels:
    role: nfs
spec:
  containers:
    - name: nfs-server-container
      image: cpuguy83/nfs-server
      securityContext:
        privileged: true
      args:
        # Pass the paths to share to the Docker image
        - /exports
Both the Service and the Pod are running. Below is the output.
Now I have to use this in my web server. Below are the details of the deployment file for the web tier.
Filename :- deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: apache-deployment
spec:
  selector:
    matchLabels:
      app: apache
  replicas: 1 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: apache
    spec:
      volumes:
        - name: nfs-volume
          nfs:
            server: 10.99.56.195
            path: /exports
      containers:
        - name: apache
          image: mobingi/ubuntu-apache2-php7:7.2
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nfs-volume
              mountPath: /var/www/html
When I run this file without the volume everything works fine, but when I run it with NFS the pod gives the following error.
kubectl describe pod apache-deployment-577ffcf9bd-p8s75
gives the following output:
Name: apache-deployment-577ffcf9bd-p8s75
Namespace: default
Priority: 0
Node: worker-node2/10.160.0.4
Start Time: Tue, 09 Jul 2019 09:53:39 +0000
Labels: app=apache
pod-template-hash=577ffcf9bd
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/apache-deployment-577ffcf9bd
Containers:
apache:
Container ID:
Image: mobingi/ubuntu-apache2-php7:7.2
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-p9qdb (ro)
/var/www/html from nfs-volume (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nfs-volume:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 10.244.1.50
Path: /exports
ReadOnly: false
default-token-p9qdb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-p9qdb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m21s default-scheduler Successfully assigned default/apache-deployment-577ffcf9bd-p8s75 to worker-node2
Warning FailedMount 4m16s kubelet, worker-node2 MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume --scope -- mount -t nfs 10.244.1.50:/exports /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume
Output: Running scope as unit: run-r3a55a8a3287448a59f7e4dbefa0333af.scope
mount.nfs: Connection timed out
Warning FailedMount 2m10s kubelet, worker-node2 MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume --scope -- mount -t nfs 10.244.1.50:/exports /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume
Output: Running scope as unit: run-r5fe7befa141d4f989e14b291afa43208.scope
mount.nfs: Connection timed out
Warning FailedMount 2m3s (x2 over 4m18s) kubelet, worker-node2 Unable to mount volumes for pod "apache-deployment-577ffcf9bd-p8s75_default(29114119-5815-442a-bb97-03fa491206a4)": timeout expired waiting for volumes to attach or mount for pod "default"/"apache-deployment-577ffcf9bd-p8s75". list of unmounted volumes=[nfs-volume]. list of unattached volumes=[nfs-volume default-token-p9qdb]
Warning FailedMount 4s kubelet, worker-node2 MountVolume.SetUp failed for volume "nfs-volume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume --scope -- mount -t nfs 10.244.1.50:/exports /var/lib/kubelet/pods/29114119-5815-442a-bb97-03fa491206a4/volumes/kubernetes.io~nfs/nfs-volume
Output: Running scope as unit: run-rd30c104228ae43df933839b6da469107.scope
mount.nfs: Connection timed out
Can anyone please help to solve this problem?
Make sure there is no firewall between the nodes.
Make sure nfs-utils is installed on the cluster nodes (a quick check for both points is sketched below).
Here is a blog post about the Docker image you are using for the NFS server; you need to do some tweaks to the ports used by the NFS server:
https://medium.com/@aronasorman/creating-an-nfs-server-within-kubernetes-e6d4d542bbb9
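A quick way to act on the first two points is to test the export from the worker node that fails to mount. A sketch (package names vary by distro, and 10.244.1.50 is the server address taken from the failing mount command in the events above):

# Install the NFS client tools (nfs-utils on RHEL/CentOS, nfs-common on Debian/Ubuntu)
sudo yum install -y nfs-utils        # or: sudo apt-get install -y nfs-common

# Check that the server actually exports /exports and that ports 111/2049 are reachable
showmount -e 10.244.1.50

# Try the same mount the kubelet attempts
sudo mount -t nfs 10.244.1.50:/exports /mnt && ls /mnt && sudo umount /mnt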
