I'm trying to deploy my Docker image into the cluster using Jenkins. My Jenkins application is running on an EC2 Ubuntu server. Initially, when I tried, I was getting this error.
I referred to this Stack Overflow post
and added the Jenkins user's IAM ARN to the aws-auth ConfigMap using
kubectl edit configmap aws-auth -n kube-system
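For reference, a minimal sketch of the kind of mapUsers entry this adds to aws-auth (the account ID, user name, and group here are placeholders, not my actual values):

mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/jenkins
    username: jenkins
    groups:
      - system:masters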
After updating the ConfigMap, when I run my pipeline I get this new error.
My question is: this Jenkins user is an admin user, so why am I getting this access control error?
Please help me with this.
As mentioned in the comments:
The service account jenkins doesn't have privileges to list pods in kube-system. You would have to create a ClusterRole and ClusterRoleBinding to make it work.
You can do that with kubectl create, as in @Gowtham Babu's answer above.
There is an example in the Medium tutorial linked below.
Also, when RBAC is enabled, the following has to be done in order to allow the Jenkins pod access to the "kube-system" namespace of the Kubernetes cluster.
Create a clusterrolebinding with "cluster-admin" permissions:
kubectl create clusterrolebinding jenkinsrolebinding --clusterrole=cluster-admin --group=system:serviceaccounts:jenkins
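If you would rather not grant full cluster-admin, a narrower ClusterRole plus ClusterRoleBinding should also work. A rough sketch (the role name and the resource/verb list are illustrative and would need to match what your pipeline actually does):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-deployer
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-deployer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-deployer
subjects:
  - kind: Group
    name: system:serviceaccounts:jenkins
    apiGroup: rbac.authorization.k8s.io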
Additional resources:
https://medium.com/@pallavisengupta/jenkins-kubernetes-authentication-and-authorization-fa6966356c90
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
Thanks, @Jakub. I was able to solve the error by creating a ClusterRoleBinding:
kubectl create clusterrolebinding NAME --clusterrole=NAME [--user=username] [--group=groupname] [--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none]
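For example, a binding along these lines (the binding name and subject are placeholders; use the user or service account named in your own error message):

kubectl create clusterrolebinding jenkins-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=jenkins:jenkins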
I got a similar issue:
$ kubectl logs demo
panic: certificatesigningrequests.certificates.k8s.io "csr-xx9l9" is forbidden: User "system:serviceaccount:default:default" cannot get resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope
To resolve it, I followed the answer from Gowtham Babu posted prior to this one:
$ kubectl create clusterrolebinding cesar3 \
--clusterrole=cluster-admin \
--user=system:serviceaccount:default:default \
--group=certificates.k8s.io
Related
We've just bought a Docker Hub Pro user so that we don't have to worry about pull rate limits.
Now I'm having a problem trying to set the Docker Hub Pro user. Is there a way to set the credentials for hub.docker.com globally?
In the Kubernetes docs I found the following article: Kubernetes | Configure nodes for private registry
On every node I executed a docker login with the credentials, copied the config.json to /var/lib/kubelet and restarted kubelet. But I'm still getting an ErrImagePull because of those rate limits.
I've copied the config.json to the following places:
/var/lib/kubelet/config.json
/var/lib/kubelet/.dockercfg
/root/.docker/config.json
/.docker/config.json
There is an option to use a secret for authentication. The problem is that we would need to edit hundreds of StatefulSets, Deployments and DaemonSets. So it would be great to set the Docker user globally.
Here's the config.json:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "[redacted]"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.13 (linux)"
  }
}
To check whether it actually logs in with the user, I've created an access token in my account. There I can see the last login with said token. The last login was when I executed the docker login command, so the images that I try to pull aren't using those credentials.
Any ideas?
Thank you!
Kubernetes implements this using image pull secrets. This doc does a better job at walking through the process.
Using the Docker config.json:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
Or you can pass the settings directly:
kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
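For Docker Hub specifically, that would look roughly like this (the secret name and the credential placeholders are illustrative; a personal access token can be used as the password):

kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-dockerhub-username> \
  --docker-password=<your-access-token> \
  --docker-email=<your-email>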
Then use those secrets in your pod definitions:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: myregistrykey
Or to use the secret at a user level (Add image pull secret to service account)
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Open the sa.yaml file, delete the line with the key resourceVersion, add the imagePullSecrets: lines, and save.
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-11-22T21:41:53Z"
  name: default
  namespace: default
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: afad07eb-f58e-4012-9ccf-0ac9762981d5
secrets:
- name: default-token-gkmp7
imagePullSecrets:
- name: regcred
Finally, replace the ServiceAccount with the updated sa.yaml file:
kubectl replace serviceaccount default -f ./sa.yaml
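Alternatively, instead of editing and replacing the file, the same result can be achieved with a single patch (assuming the secret is named regcred):

kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'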
We use docker-registry as a pull-through proxy cache in our Kubernetes clusters, and Docker Hub credentials can be set in its configuration. The Docker daemons on the Kubernetes nodes are configured to use the proxy by setting registry-mirrors in /etc/docker/daemon.json.
This way, you do not need to modify any Kubernetes manifest to include pull secrets. Our complete setup is described in a blog post.
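A minimal sketch of the node-side piece, assuming the proxy cache is reachable at https://mirror.example.com (the hostname is a placeholder); /etc/docker/daemon.json would contain:

{
  "registry-mirrors": ["https://mirror.example.com"]
}

On the registry side, the Docker Hub credentials go under the proxy section of the registry's configuration (remoteurl, username, password).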
I ran into the same problem as the OP. It turns out that putting the Docker credential file where kubelet can find it works for Kubernetes version 1.18 or higher. I have tested this and can confirm that kubelet 1.18 picks up the config.json placed in /var/lib/kubelet correctly and authenticates against the Docker registry.
kubectl is installed correctly, but the expose does not work. What am I missing here?
shivam@shivam-SVS151290X:~$ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/shivam/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/shivam/.minikube/profiles/minikube/client.crt
    client-key: /home/shivam/.minikube/profiles/minikube/client.key
shivam@shivam-SVS151290X:~$ kubectl
kubectl controls the Kubernetes cluster manager.
Find more information at:
https://kubernetes.io/docs/reference/kubectl/overview/
Basic Commands (Beginner):
create Create a resource from a file or from stdin.
expose Take a replication controller, service, deployment or pod and
expose it as a new Kubernetes Service
run Run a particular image on the cluster
set Set specific features on objects
Basic Commands (Intermediate):
explain Documentation of resources
get Display one or many resources
edit Edit a resource on the server
delete Delete resources by filenames, stdin, resources and names, or by
resources and label selector
Deploy Commands:
rollout Manage the rollout of a resource
scale Set a new size for a Deployment, ReplicaSet or Replication
Controller
autoscale Auto-scale a Deployment, ReplicaSet, or ReplicationController
Cluster Management Commands:
certificate Modify certificate resources.
cluster-info Display cluster info
top Display Resource (CPU/Memory/Storage) usage.
cordon Mark node as unschedulable
uncordon Mark node as schedulable
drain Drain node in preparation for maintenance
taint Update the taints on one or more nodes
Troubleshooting and Debugging Commands:
describe Show details of a specific resource or group of resources
logs Print the logs for a container in a pod
attach Attach to a running container
exec Execute a command in a container
port-forward Forward one or more local ports to a pod
proxy Run a proxy to the Kubernetes API server
cp Copy files and directories to and from containers.
auth Inspect authorization
Advanced Commands:
diff Diff live version against would-be applied version
apply Apply a configuration to a resource by filename or stdin
patch Update field(s) of a resource using strategic merge patch
replace Replace a resource by filename or stdin
wait Experimental: Wait for a specific condition on one or many
resources.
convert Convert config files between different API versions
kustomize Build a kustomization target from a directory or a remote url.
Settings Commands:
label Update the labels on a resource
annotate Update the annotations on a resource
completion Output shell completion code for the specified shell (bash or
zsh)
Other Commands:
alpha Commands for features in alpha
api-resources Print the supported API resources on the server
api-versions Print the supported API versions on the server, in the form of
"group/version"
config Modify kubeconfig files
plugin Provides utilities for interacting with plugins.
version Print the client and server version information
Usage:
kubectl [flags] [options]
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all
commands).
shivam@shivam-SVS151290X:~$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
Error from server (AlreadyExists): pods "hello-minikube" already exists
shivam@shivam-SVS151290X:~$ kubectl expose deployment hello-minikube --type=NodePort
Error from server (NotFound): deployments.apps "hello-minikube" not found
shivam@shivam-SVS151290X:~$
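On recent kubectl versions, kubectl run creates a bare Pod rather than a Deployment, which would explain why the pod hello-minikube already exists while no Deployment with that name can be found. A sketch of two ways around it (the port value assumes the echoserver listens on 8080):

# Option 1: expose the existing pod directly
kubectl expose pod hello-minikube --type=NodePort --port=8080

# Option 2: recreate the workload as a Deployment, then expose it
kubectl delete pod hello-minikube
kubectl create deployment hello-minikube --image=gcr.io/google_containers/echoserver:1.4
kubectl expose deployment hello-minikube --type=NodePort --port=8080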
I'm trying to connect Jenkins to a fresh K8S cluster via the Kubernetes plugin; however, I'm seeing the following error when I attempt to test.
Then I tried to add my ~/.kube/config as a secret file to the Jenkins credentials, and I'm seeing this error.
k8s version is 1.15.4 and Jenkins 2.190.1
Any ideas?
You need to use the "Secret text" type of credentials with a service account token. Create a service account as Rodrigo Loza mentioned. This example creates a jenkins namespace and a service account with admin rights in it:
kubectl create namespace jenkins && \
kubectl create serviceaccount jenkins --namespace=jenkins && \
kubectl describe secret $(kubectl describe serviceaccount jenkins --namespace=jenkins | grep Token | awk '{print $2}') --namespace=jenkins && \
kubectl create rolebinding jenkins-admin-binding --clusterrole=admin --serviceaccount=jenkins:jenkins --namespace=jenkins
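The token printed by the describe secret step is what goes into the "Secret text" credential. If you prefer to pull it out directly, something along these lines should work on this cluster version (1.15), where service account token secrets still exist:

kubectl -n jenkins get secret \
  $(kubectl -n jenkins get serviceaccount jenkins -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode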
Jenkins has either not created its ServiceAccount or its ClusterRoleBinding with permissions to access the Kubernetes API. That is why you are seeing that it cannot list pod resources. Have you deployed Jenkins using the Helm chart? If so, is your Tiller service account correctly set up?
I get an error when trying to access the portal management API:
Management API unreachable or error occurs, please check logs
I'm using Gravitee 1.27.1, running on Kubernetes with an Nginx Ingress.
Mongo:
ElasticSearch:
kubectl create -f . (My files - I'm using cluster)
Nginx Ingress:
kubectl create -f . (My files)
Gravitee:
helm install --name api-gateway gravitee -f values.yaml --namespace my-namespace
All Pods are healthy (ok):
kubectl get pod -n my-namespace
I found the solution: check the URL of your page; having both HTTP and HTTPS in play is the problem. Access it with https://api-gateway.mydomain.com.
Success! I hope it helps!
I was trying to deploy a service to a Kubernetes cluster and first got the following error:
Error: container has runAsNonRoot and image will run as root
After some googling, I found out that there is a Pod Security Policy which doesn't allow me to run images as root, as suggested in the error.
I found out that adding the following securityContext configuration to my deployment definition might solve my problem:
spec:
  securityContext:
    runAsUser: [uID]
    fsGroup: [fsID]
I couldn't find a way though to get the user id for a given username. Is it possible using kubectl? Or do I have to somehow assign my own userId/groupId?
As an example, let's say I am using the minikube context:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: C:\...\client.crt
    client-key: C:\...\client.key
Thanks!
I couldn't find a way though to get the user id for a given username. Is it possible using kubectl? Or do I have to somehow assign my own userId/groupId?
You can exec into your pod with something like kubectl exec -it <pod name> -- sh and run the id command to see the user and group IDs for the said username, in this case the current user context.
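For example (the pod name and the output shown are purely illustrative):

$ kubectl exec -it mypod -- id
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)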
The UID and GID are shared between the Linux kernel and the Docker container. You can get the UID and GID to be used to run your containers by looking them up in the /etc/passwd file of the host. You can create a user on your Linux host and use the corresponding IDs, or create the user in your Docker image with known IDs and use them.
If the user has the cluster-admin role, the PSP will not be applied to that user.
You can keep the value zero for uID and fsID in the securityContext if the minikube user is a cluster admin.
You can verify this from the client-certificate file: the CN (username) and O (group) values in the Subject field will provide the username and group details.
openssl x509 -in C:\...\client.crt -text -noout
Or you have to make the changes in your Dockerfile so that it runs the commands as a non-root user.
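A minimal Dockerfile sketch of that approach, with an illustrative base image, username, and UID/GID that you would then reuse in runAsUser/fsGroup:

FROM alpine:3.18
# create a group and user with known, fixed IDs
RUN addgroup -g 1000 appgroup && adduser -u 1000 -G appgroup -D appuser
# run all subsequent commands and the container itself as that user
USER 1000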