Currently I am handling OIDC using OAuth2-proxy and Istio. We would now like to upgrade to Anthos since we are mainly on GCP. Everything works but I need to configure envoyExtAuthzHttp. Previously I would run kubectl edit configmap istio -n istio-system and add the following...
extensionProviders:
- name: oauth2-proxy
  envoyExtAuthzHttp:
    service: http-oauth-proxy.istio-system.svc.cluster.local
    port: 4180
    includeRequestHeadersInCheck: ['cookie']
    headersToUpstreamOnAllow: ['authorization']
    headersToDownstreamOnDeny: ['content-type', 'set-cookie']
However, ASM does not seem to install that config map...
Error from server (NotFound): configmaps "istio" not found
I noticed there is an istio-asm-managed config map, so I tried adding the config to that, but then I am not sure how to restart ASM: the command I am used to, kubectl rollout restart deployment/istiod -n istio-system, isn't working.
When I try to go to the site instead of being redirected I see...
RBAC: access denied
What worked for me, after studying what asmcli does when you follow the migration steps here, was setting this ConfigMap in istio-system before enabling Anthos Service Mesh:
apiVersion: v1
data:
  mesh: |
    extensionProviders:
    ...<your settings here>...
kind: ConfigMap
metadata:
  name: istio-asm-managed-rapid
  namespace: istio-system
I have not verified whether it was actually necessary to do this before enabling ASM, but that is how I did it.
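For completeness, a sketch of that ConfigMap with the extensionProviders settings from the question filled in. The -rapid suffix corresponds to the ASM release channel, so the exact ConfigMap name may differ in your cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-asm-managed-rapid
  namespace: istio-system
data:
  mesh: |
    extensionProviders:
    - name: oauth2-proxy
      envoyExtAuthzHttp:
        service: http-oauth-proxy.istio-system.svc.cluster.local
        port: 4180
        includeRequestHeadersInCheck: ['cookie']
        headersToUpstreamOnAllow: ['authorization']
        headersToDownstreamOnDeny: ['content-type', 'set-cookie']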
Problem: When I run kubectl apply on both of the files below and try to open the app in the browser at http://192.168.49.2:30080/, the app does not render. I also tried running minikube service fleetman-webapp --url but still no progress. Please help!
Additional information: minikube ip returns 192.168.49.2.
Note: I have installed the Docker Desktop app on my MacBook Air running Catalina.
Browser message: This site can’t be reached. 192.168.49.2 took too long to respond.
Docker image link: https://hub.docker.com/r/richardchesterwood/k8s-fleetman-webapp-angular
first-pod.yaml file
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    mylabelname: webapp
spec:
  containers:
  - name: webapp
    image: richardchesterwood/k8s-fleetman-webapp-angular:release0
webapp-services.yaml file
apiVersion: v1
kind: Service
metadata:
  name: fleetman-webapp
spec:
  # This defines which pods are going to be represented by this Service
  # The service becomes a network endpoint for either other services
  # or maybe external users to connect to (eg browser)
  selector:
    mylabelname: webapp
  ports:
  - name: http
    port: 80
    nodePort: 30080
  type: NodePort
Try starting minikube with the none driver:
$ minikube start --driver=none
The none driver allows advanced minikube users to skip VM creation, allowing minikube to be run on a user-supplied VM.
Hence you will be able to reach your app via your host's (i.e. the user-supplied VM's) network address.
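A minimal sketch of the suggested flow, assuming a Linux host where the none driver is supported (it runs Kubernetes directly on the host, so it will not work inside Docker Desktop on macOS):
$ minikube delete
$ minikube start --driver=none
$ kubectl apply -f first-pod.yaml -f webapp-services.yaml
$ curl http://$(minikube ip):30080/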
I have a helm chart that installs/creates an instance of our app. Our app consists of multiple micro-services, and one of them is nginx. The nginx service is of type LoadBalancer.
So when the user first tries to hit the load balancer IP from the browser, I want to open a web page which will ask them to bind some domains (e.g. a.yourdomain.com and b.yourdomain.com) to the load balancer IP. Once they do that, they will click a "verify" button, and at that point I want to check on the server side whether the domains are correctly pointing to the load balancer IP or not.
Now the problem is: how can I get the load balancer's external IP inside the nginx pod, so that I can ping the domains and check whether they are pointing to the load balancer IP or not?
Edit
Note: I would like to avoid using kubectl because I do not want to install an extra utility for a one-time job.
I have found a solution, tested it, and it's working.
To find the external IP associated with an nginx Service of type LoadBalancer, you first want to create a service account:
kubectl create serviceaccount hello
and also create a ClusterRole and RoleBinding like the following:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-services
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-services
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-services
subjects:
- kind: ServiceAccount
  name: hello
  namespace: default
Then you create your pod with serviceAccountName: hello.
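A minimal sketch of such a pod (the pod name, container name, and image here are placeholders chosen only so that curl is available inside; in the real setup this would be your nginx pod):
apiVersion: v1
kind: Pod
metadata:
  name: lb-ip-checker
  namespace: default
spec:
  serviceAccountName: hello
  containers:
  - name: checker
    image: curlimages/curl
    command: ["sleep", "3600"]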
Now, from inside that pod, you can make a curl request to the API server, as shown in the Kubernetes documentation:
APISERVER=https://kubernetes.default.svc
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/$NAMESPACE/services/nginx/
The IP you are looking for will be under .status.loadBalancer.ingress[0].ip.
Let me know if it was helpful.
The value of the external IP will be in the status of the service object.
kubectl get svc $SVC_NAME -n $NS_NAME -o jsonpath="{.status.loadBalancer.ingress[*].ip}" will get the external IP.
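For example, to capture it for later use (the service name nginx and namespace default are just illustrations here):
SVC_NAME=nginx
NS_NAME=default
EXTERNAL_IP=$(kubectl get svc $SVC_NAME -n $NS_NAME -o jsonpath="{.status.loadBalancer.ingress[*].ip}")
echo "$EXTERNAL_IP"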
I found the solution: the trick is to call the Kubernetes API server with the default service account token that is seeded by Kubernetes. These two simple commands will do the trick:
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/<your_namespace>/services/nginx \
| jq -r '.status.loadBalancer.ingress[0].ip'
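Building on that, a sketch of the actual domain check that the "verify" button could trigger from inside the pod (assumes getent, awk and jq are available in the image; a.yourdomain.com is the example domain from the question):
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
LB_IP=$(curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/<your_namespace>/services/nginx \
  | jq -r '.status.loadBalancer.ingress[0].ip')
# resolve the domain the user claims to have configured and compare it with the LB IP
RESOLVED_IP=$(getent hosts a.yourdomain.com | awk '{print $1}')
if [ "$RESOLVED_IP" = "$LB_IP" ]; then
  echo "a.yourdomain.com points at the load balancer"
else
  echo "a.yourdomain.com is not configured yet"
fi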
I'm working with Spinnaker to create a new CD pipeline.
I've deployed Halyard in a Docker container on my computer, and from it deployed Spinnaker to Google Kubernetes Engine.
After that, I prepared a new ingress YAML file, shown below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-cloud
  namespace: spinnaker
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: spin-deck
          servicePort: 9000
After accessing the Spinnaker UI via a public IP, I got an error, shown below.
Error fetching applications. Check that your gate endpoint is accessible.
After that, I checked the docs and ran some commands, shown below.
I've checked the service data on my K8S cluster.
spin-deck NodePort 10.11.245.236 <none> 9000:32111/TCP 1h
spin-gate NodePort 10.11.251.78 <none> 8084:31686/TCP 1h
For UI
hal config security ui edit --override-base-url "http://spin-deck.spinnaker:9000"
For API
hal config security api edit --override-base-url "http://spin-gate.spinnaker:8084"
After running these commands and redeploying spinnaker, the error repeated itself.
How can I solve the problem of accessing the spinnaker gate from the UI?
--override-base-url should be populated without the port.
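For example, assuming spin-deck and spin-gate are each reachable on standard HTTP port 80 under hypothetical hostnames, the overrides would look like this, followed by a redeploy:
hal config security ui edit --override-base-url "http://spinnaker.example.com"
hal config security api edit --override-base-url "http://gate.spinnaker.example.com"
hal deploy apply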
I've bootstrapped a Kubernetes 1.9 RBAC cluster with kubeadm and started Jenkins inside a pod, based on jenkins/jenkins:lts. I would like to try out https://github.com/jenkinsci/kubernetes-plugin .
I have already created a serviceaccount based on the proposal in https://gist.github.com/lachie83/17c1fff4eb58cf75c5fb11a4957a64d2
> kubectl -n dev-infra create sa jenkins
> kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=dev-infra:jenkins
> kubectl -n dev-infra get sa jenkins -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-02-16T12:06:26Z
  name: jenkins
  namespace: dev-infra
  resourceVersion: "1295580"
  selfLink: /api/v1/namespaces/dev-infra/serviceaccounts/jenkins
  uid: d040041c-1311-11e8-a4f8-005056039a14
secrets:
- name: jenkins-token-vmt79
> kubectl -n dev-infra get secret jenkins-token-vmt79 -o yaml
apiVersion: v1
data:
  ca.crt: LS0tL...0tLQo=
  namespace: ZGV2LWluZnJh
  token: ZXlK...tdVE=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: jenkins
    kubernetes.io/service-account.uid: d040041c-1311-11e8-a4f8-005056039a14
  creationTimestamp: 2018-02-16T12:06:26Z
  name: jenkins-token-vmt79
  namespace: dev-infra
  resourceVersion: "1295579"
  selfLink: /api/v1/namespaces/dev-infra/secrets/jenkins-token-vmt79
  uid: d041fa6c-1311-11e8-a4f8-005056039a14
type: kubernetes.io/service-account-token
After that I go to Manage Jenkins -> Configure System -> Cloud -> Kubernetes and set the Kubernetes URL to the cluster API address that I also use in my kubectl KUBECONFIG (server: url:port).
When I hit test connection I get "Error testing connection https://url:port: Failure executing: GET at: https://url:port/api/v1/namespaces/dev-infra/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:dev-infra:default" cannot list pods in the namespace "dev-infra".
I don't want to give the dev-infra:default user a cluster-admin role, and I want to use the jenkins service account I created. I can't understand how to configure the credentials in Jenkins. When I hit Add Credentials on the screen shown in https://github.com/jenkinsci/kubernetes-plugin/blob/master/configuration.png I get the following options:
- Username with password
- Docker Host Certificate Authentication
- Kubernetes Service Account
- OpenShift OAuth token
- OpenShift Username and Password
- SSH Username with private key
- Secret file
- Secret text
- Certificate
I could not find a clear example of how to configure the Jenkins Kubernetes cloud connector so that Jenkins authenticates with the jenkins service account.
Could you please help me find a step-by-step guide - what kind of credentials do I need?
Regards,
Pavel
The best practice is to launch your Jenkins master pod with the service account you created, instead of creating credentials in Jenkins.
See example yaml
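For illustration, a minimal sketch of such a pod spec using the jenkins service account from the question (this is not the linked example; the pod name and ports are assumptions, 8080/50000 being the usual Jenkins web and agent ports):
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-master
  namespace: dev-infra
spec:
  serviceAccountName: jenkins
  containers:
  - name: jenkins
    image: jenkins/jenkins:lts
    ports:
    - containerPort: 8080
    - containerPort: 50000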
The Kubernetes plugin for Jenkins reads this file /var/run/secrets/kubernetes.io/serviceaccount/token. Please see if your Jenkins pod has this. The service account should have permissions targeting pods in the appropriate namespace.
In fact, we are using Jenkins running outside kubernetes 1.9. We simply picked the default service account token (from default namespace), and put it in that file on the Jenkins master. Restarted ... and the kubernetes token credential type was visible.
We do have a role and rolebinding though:
kubectl create role jenkins --verb=get,list,watch,create,patch,delete --resource=pods
kubectl create rolebinding jenkins --role=jenkins --serviceaccount=default:default
In our case, Jenkins is configured to spin up slave pods in the default namespace. So this combination works.
More questions (similar):
Can I use Jenkins kubernetes plugin when Jenkins server is outside of a kubernetes cluster?
After some digging, it appears that the easiest way to go (without giving extra permissions to the default service account for the namespace) is to:
kubectl -n <your-namespace> create sa jenkins
kubectl create clusterrolebinding jenkins --clusterrole cluster-admin --serviceaccount=<your-namespace>:jenkins
kubectl get -n <your-namespace> sa/jenkins --template='{{range .secrets}}{{ .name }} {{end}}' | xargs -n 1 kubectl -n <your-namespace> get secret --template='{{ if .data.token }}{{ .data.token }}{{end}}' | head -n 1 | base64 -d -
Seems like you can store this token as type Secret text in Jenkins and the plugin is able to pick it up.
Another advantage of this approach, compared to overwriting the default service account as mentioned above, is that you can have a secret per cluster - meaning you can use one Jenkins to connect to, for example, dev -> quality -> prod namespaces or clusters with separate accounts.
Please feel free to contribute, if you have a better way to go.
Regards,
Pavel
For more details you can check:
- https://gist.github.com/lachie83/17c1fff4eb58cf75c5fb11a4957a64d2
- https://github.com/openshift/origin/issues/6807
I have built a 4-node Kubernetes cluster running multi-container pods, all on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like to be able to pull them automatically.
I have tried running docker login on each server and putting the .dockercfg file in /root and /core
I have also done the above with the .docker/config.json
I have added the secret to the kube master and added imagePullSecrets: name: docker.io to the Pod configuration file.
When I create the pod I get the error message:
Error: image <user/image>:latest not found
If I log in and run docker pull it will pull the image. I have tried this using docker.io and quay.io.
To add to what @rob said: as of Docker 1.7, the use of .dockercfg has been deprecated and Docker now uses a ~/.docker/config.json file. There is support for this type of secret in Kubernetes 1.1, but you must create it using a different key and type in the yaml:
First, base64 encode your ~/.docker/config.json:
cat ~/.docker/config.json | base64 -w0
Note that the base64 output must appear on a single line, so -w0 is used to disable line wrapping.
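If your base64 does not support -w0 (for example the BSD variant that ships with macOS), an equivalent that strips newlines is:
cat ~/.docker/config.json | base64 | tr -d '\n'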
Next, create a yaml file:
my-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
$ kubectl create -f my-secret.yaml && kubectl get secrets
NAME                  TYPE                                  DATA
default-token-olob7   kubernetes.io/service-account-token   2
registrypullsecret    kubernetes.io/dockerconfigjson        1
Then, in your pod's yaml (or the pod template of your replication controller) you need to reference registrypullsecret:
apiVersion: v1
kind: Pod
metadata:
  name: my-private-pod
spec:
  containers:
  - name: private
    image: yourusername/privateimage:version
  imagePullSecrets:
  - name: registrypullsecret
If you need to pull an image from a private Docker Hub repository, you can use the following.
Create your secret key
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
Then add the newly created key to your Kubernetes service account.
Retrieve the current service account
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Edit sa.yaml and add the imagePullSecrets entry after the secrets section:
imagePullSecrets:
- name: myregistrykey
Update the service account
kubectl replace serviceaccount default -f ./sa.yaml
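Equivalently, the same change can be applied in one step with kubectl patch (using the secret name from above):
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'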
I can confirm that imagePullSecrets was not working for me on the Deployment, but you can do the following:
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
kubectl edit serviceaccounts default
Add
imagePullSecrets:
- name: myregistrykey
to the end, after the secrets section, then save and exit.
And it works. Tested with Kubernetes 1.6.7.
Kubernetes supports a special type of secret that you can create that will be used to fetch images for your pods. More details here.
For CentOS 7, the docker config file is under /root/.dockercfg
echo $(cat /root/.dockercfg) | base64 -w 0
Copy and paste the result into a secret YAML based on the old format:
apiVersion: v1
kind: Secret
metadata:
  name: docker-secret
type: kubernetes.io/dockercfg
data:
  .dockercfg: <YOUR_BASE64_JSON_HERE>
And it worked for me; hope it can also help you.
The easiest way to create the secret with the same credentials as your docker configuration is:
kubectl create secret generic myregistry --from-file=.dockerconfigjson=$HOME/.docker/config.json
This already encodes data in base64.
If you can download the images with docker, then kubernetes should be able to download them too. But you need to add this to your kubernetes objects:
spec:
  template:
    spec:
      imagePullSecrets:
      - name: myregistry
      containers:
        # ...
Where myregistry is the name given in the previous command.
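To double-check that the secret holds the expected credentials, you can decode it back out (myregistry is the secret created above):
kubectl get secret myregistry -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode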
Go the easy way, but do not forget to define --type and add it to the proper namespace:
kubectl create secret generic YOURS-SECRET-NAME \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson