I am trying to generate a service account token on a Kubernetes cluster for API authentication. The operation succeeds and the secret is created, but no token is generated. What could I be missing here?
{
"kind": "Secret",
"apiVersion": "v1",
"metadata": {
"name": "defaultsecret1",
"annotations": {
"kubernetes.io/service-account.name": "cfme"
}
},
"type": "kubernetes.io/service-account-token"
}
[root@atomic001 ~]# kubectl create -f secret.json
secret "defaultsecret1" created
[root@atomic001 ~]# kubectl get secret defaultsecret1
NAME TYPE **DATA** AGE
defaultsecret1 kubernetes.io/service-account-token **0** 13s
[root@atomic001 ~]# kubectl describe secret defaultsecret1
Name: defaultsecret1
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name=core1
Type: kubernetes.io/service-account-token
Data
====
<--- token should be here
[root@atomic001 ~]#
Been up and down and all around on this. Any help is appreciated.
I figured this out.
I had to generate a private key with openssl and then point to it in the controller-manager configuration file. Now the tokens are being created.
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/kubernetes/serviceaccount.key"
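For reference, here is a minimal sketch of the full fix, assuming the sysconfig-style layout used above (the paths, key size, KUBE_API_ARGS variable name, and service names are assumptions; adjust for your distribution). The controller manager signs tokens with this private key, and the API server needs the matching key via --service-account-key-file to validate them:
# Generate the signing key (path is an example)
openssl genrsa -out /etc/kubernetes/serviceaccount.key 2048
# /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/kubernetes/serviceaccount.key"
# /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-account-key-file=/etc/kubernetes/serviceaccount.key"
# Restart both services, then delete and recreate the secret so the token controller repopulates it
systemctl restart kube-apiserver kube-controller-manager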
I'm creating a kind cluster with kind create cluster --name kind and I want to access it from another Docker container, but when I try to apply a Kubernetes file from a container (kubectl apply -f deployment.yml) I get this error:
The connection to the server 127.0.0.1:6445 was refused - did you specify the right host or port?
Indeed, when I try to curl the kind control plane from a container, it's unreachable.
> docker run --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
curl: (7) Failed to connect to 127.0.0.1 port 6445 after 0 ms: Connection refused
However, the kind control plane is publishing the right port, but only on localhost.
> docker ps --format "table {{.Image}}\t{{.Ports}}"
IMAGE PORTS
kindest/node:v1.23.4 127.0.0.1:6445->6443/tcp
Currently, the only solution I have found is to use host network mode.
> docker run --network host --entrypoint curl curlimages/curl:latest 127.0.0.1:6445
Client sent an HTTP request to an HTTPS server.
This solution doesn't seem to be the most secure. Is there another way, like connecting the kind network to my container or something like that, that I missed?
I don't have enough rep to comment on the other answer, but I wanted to add what ultimately worked for me.
Takeaways
The kind cluster runs in its own bridge network, kind
The service with the Kubernetes client runs in another container with a mounted kubeconfig volume
As described above, the containers need to be in the same network unless you want your service to run in the host network
The server address for the kubeconfig is the container name + internal port, e.g. kind-control-plane:6443. The port is NOT the exposed port; in the example below it is 6443, NOT 38669
CONTAINER ID IMAGE PORTS
7f2ee0c1bd9a kindest/node:v1.25.3 127.0.0.1:38669->6443/tcp
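To double-check which container name and address to put in the kubeconfig, you can inspect the kind network (a sketch; the control plane container name assumes kind's default naming):
# List the containers attached to the kind bridge network and their addresses
docker network inspect kind
# Or grab just the control plane container's IP on that network
docker inspect -f '{{ .NetworkSettings.Networks.kind.IPAddress }}' kind-control-plane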
Kube config for the container
# path/to/some/kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true # don't use in prod; equivalent of --insecure on the CLI
    server: https://<kind-control-plane container name>:6443 # NOTE: the port is the internal container port
  name: kind-kind # or whatever
contexts:
- context:
    cluster: kind-kind
    user: <some-service-account>
  name: kind-kind # or whatever
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: <some-service-account>
  user:
    token: <TOKEN>
Docker container stuff
If using docker-compose you can add the kind network to the container such as:
# docker-compose.yml
services:
  foobar:
    build:
      context: ./.config
    networks:
      - kind # add this container to the kind network
    volumes:
      - path/to/some/kube/config:/somewhere/in/the/container
networks:
  kind: # define the kind network
    external: true # specifies that the network already exists in docker
If running a new container:
docker run --network kind -v path/to/some/kube/config:/somewhere/in/the/container <image>
Container already running?
docker network connect kind <container name>
I don't know exactly why you want to do this, but no problem, I think this could help you.
First, let's pull your Docker image:
❯ docker pull curlimages/curl
In my kind cluster I have 3 control plane nodes and 3 worker nodes. Here are the containers of my kind cluster:
❯ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39dbbb8ca320 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:35327->6443/tcp so-cluster-1-control-plane
62b5538275e9 kindest/haproxy:v20220207-ca68f7d4 "haproxy -sf 7 -W -d…" 7 days ago Up 7 days 127.0.0.1:35625->6443/tcp so-cluster-1-external-load-balancer
9f189a1b6c52 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:40845->6443/tcp so-cluster-1-control-plane3
4c53f745a6ce kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 127.0.0.1:36153->6443/tcp so-cluster-1-control-plane2
97e5613d2080 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30081->30080/tcp so-cluster-1-worker2
0ca64a907707 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30080->30080/tcp so-cluster-1-worker
9c5d26caee86 kindest/node:v1.23.5 "/usr/local/bin/entr…" 7 days ago Up 7 days 0.0.0.0:30082->30080/tcp so-cluster-1-worker3
The container that is interesting for us here is the haproxy one (kindest/haproxy:v20220207-ca68f7d4), which has the role of load-balancing the incoming traffic to the nodes (and, in our example, especially to the control plane nodes). We can see that port 35625 of our host machine is mapped to port 6443 of the haproxy container (127.0.0.1:35625->6443/tcp).
So our cluster endpoint is https://127.0.0.1:35625; we can confirm this in our kubeconfig file (~/.kube/config):
❯ cat .kube/config
apiVersion: v1
kind: Config
preferences: {}
users:
- name: kind-so-cluster-1
  user:
    client-certificate-data: <base64data>
    client-key-data: <base64data>
clusters:
- cluster:
    certificate-authority-data: <certificate-authority-dataBase64data>
    server: https://127.0.0.1:35625
  name: kind-so-cluster-1
contexts:
- context:
    cluster: kind-so-cluster-1
    user: kind-so-cluster-1
    namespace: so-tests
  name: kind-so-cluster-1
current-context: kind-so-cluster-1
Let's run the curl container in the background:
❯ docker run -d --network host curlimages/curl sleep 3600
ba183fe2bb8d715ed1e503a9fe8096dba377f7482635eb12ce1322776b7e2366
As expected, we can't send a plain HTTP request to an endpoint that listens on an HTTPS port:
❯ docker exec -it ba curl 127.0.0.1:35625
Client sent an HTTP request to an HTTPS server.
We can try to use the certificate from the "certificate-authority-data" field of our kubeconfig to check whether that changes something (it should).
Let's create a file named my-ca.crt that contains the decoded certificate data:
base64 -d <<< <certificate-authority-dataBase64dataFromKubeConfig> > my-ca.crt
Since the working directory of the curl Docker image is "/", let's copy our cert to this location in the container and verify that it is actually there:
docker cp my-ca.crt ba183fe:/
❯ docker exec -it ba sh
/ $ ls my-ca.crt
my-ca.crt
Let's try our curl request again, but this time with the certificate:
❯ docker exec -it ba curl --cacert my-ca.crt https://127.0.0.1:35625
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}
You can get the same result by adding the "--insecure" flag to your curl request:
❯ docker exec -it ba curl https://127.0.0.1:35625 --insecure
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {},
"code": 403
}
However, we can't access our cluster as an anonymous user! So let's get a token from Kubernetes (cf. https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/):
# Create a secret to hold a token for the default service account
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: default-token
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
EOF
Once the token controller has populated the secret with a token:
# Get the token value
❯ kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode
eyJhbGciOiJSUzI1NiIsImtpZCI6InFSTThZZ05lWHFXMWExQlVSb1hTcHNxQ3F6Z2Z2aWpUaUYwd2F2TGdVZ0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzby10ZXN0cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYzY0OTg1OS0xNzkyLTQzYTQtOGJjOC0zMDEzZDgxNjRmY2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c28tdGVzdHM6ZGVmYXVsdCJ9.VLfjuym0fohYTT_uoLPwM0A6u7dUt2ciWZF2K9LM_YvQ0UZT4VgkM8UBVOQpWjTmf9s2B5ZxaOkPu4cz_B4xyDLiiCgqiHCbUbjxE9mphtXGKQwAeKLvBlhbjYnHb9fCTRW19mL7VhqRgfz5qC_Tae7ysD3uf91FvqjjxsCyzqSKlsq0T7zXnzQ_YQYoUplGa79-LS_xDwG-2YFXe0RfS9hkpCILpGDqhLXci_gwP9DW0a6FM-L1R732OdGnb9eCPI6ReuTXQz7naQ4RQxZSIiNd_S7Vt0AYEg-HGvSkWDl0_DYIyHShMeFHu1CtfTZS5xExoY4-_LJD8mi
Now let's execute the curl command directly with the token:
❯ docker exec -it ba curl -X GET https://127.0.0.1:35625/api --header "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6InFSTThZZ05lWHFXMWExQlVSb1hTcHNxQ3F6Z2Z2aWpUaUYwd2F2TGdVZ0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJzby10ZXN0cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzYzY0OTg1OS0xNzkyLTQzYTQtOGJjOC0zMDEzZDgxNjRmY2IiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c28tdGVzdHM6ZGVmYXVsdCJ9.VLfjuym0fohYTT_uoLPwM0A6u7dUt2ciWZF2K9LM_YvQ0UZT4VgkM8UBVOQpWjTmf9s2B5ZxaOkPu4cz_B4xyDLiiCgqiHCbUbjxE9mphtXGKQwAeKLvBlhbjYnHb9fCTRW19mL7VhqRgfz5qC_Tae7ysD3uf91FvqjjxsCyzqSKlsq0T7zXnzQ_YQYoUplGa79-LS_xDwG-2YFXe0RfS9hkpCILpGDqhLXci_gwP9DW0a6FM-L1R732OdGnb9eCPI6ReuTXQz7naQ4RQxZSIiNd_S7Vt0AYEg-HGvSkWDl0_DYIyHShMeFHu1CtfTZS5xExoY4-_LJD8mi" --insecure
{
"kind": "APIVersions",
"versions": [
"v1"
],
"serverAddressByClientCIDRs": [
{
"clientCIDR": "0.0.0.0/0",
"serverAddress": "172.18.0.5:6443"
}
]
}
It works!
I still don't know why you want to do this, but I hope that this helped you.
Since it's not exactly what you wanted (because here I use the host network), you can instead use this: How to communicate between Docker containers via "hostname", as proposed by @SergioSantiago. Thanks for your comment!
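For instance, a rough sketch of that approach, reusing the container and network names from the setup above (not tested here):
# Attach the curl container to the kind bridge network instead of the host network
docker run -d --network kind --name curl-in-kind curlimages/curl sleep 3600
# Nodes are then reachable by container name on the internal port 6443
docker exec curl-in-kind curl -k https://so-cluster-1-control-plane:6443/version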
bguess
I have a template that defines an object "ImageStream":
{
"apiVersion":"v1",
"kind": "ImageStream",
"metadata": {
"name": "${APPLICATION_NAME}-img",
"labels": {
"app": "${APPLICATION_NAME}"
}
},
"spec": {
"tags": [
{
"name": "latest",
"from": {
"kind": "DockerImage",
"name": "rlanhellas/${APPLICATION_NAME}"
}
}
]
}
}
So, after it is created I get the image inside the OpenShift registry; the oc get is command returns this:
$ oc get is
NAME IMAGE REPOSITORY TAGS UPDATED
safepark-netcore-img default-route-openshift-image-registry.apps.us-east-2.starter.openshift-online.com/safepark/safepark-netcore-img latest About an hour ago
My original image lives in Docker Hub and my pipeline tool always updates the latest tag in Docker Hub. But the ImageStream in OpenShift is not updated, so I always get an old version of my image in OpenShift, and a new build is never triggered because the OpenShift image is not updated.
How can I "link" the ImageStream in OpenShift to my Docker Hub image and ensure that an updated image in Docker Hub will update the image in OpenShift?
Important: I'm using OpenShift Online with the Free Plan.
If you want an image to automatically sync from an external registry to your OpenShift registry, you can use importPolicy to achieve this.
The OpenShift 3.11 documentation explains the importPolicy functionality.
Set importPolicy.scheduled to true to automatically sync the image:
apiVersion: v1
kind: ImageStream
metadata:
  name: ruby
spec:
  tags:
  - from:
      kind: DockerImage
      name: openshift/ruby-20-centos7
    name: latest
    importPolicy:
      scheduled: true
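Applied to your template, the idea is to add the same importPolicy block under the latest tag, or to flip it on for the existing tag with oc (a sketch; the image and stream names follow your template, and flag support depends on your oc version). Note that scheduled imports are polled periodically (roughly every 15 minutes by default), not instantly on push:
# Enable scheduled re-import of the latest tag from Docker Hub
oc tag --source=docker docker.io/rlanhellas/<application-name>:latest <application-name>-img:latest --scheduled=true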
I am following the article mentioned below for creating dynamic persistent volume claims.
https://learn.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv
I created a PersistentVolumeClaim using:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: taskmanager-01
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 16Gi
Question 01:
From what I understand, the PersistentVolume and the actual underlying disk will be provisioned when this is created.
Is this correct?
Question 02:
kubectl get pvc -n <namespace>
returns the status of my PVC as Pending. I get the following errors in the Kubernetes event list:
Failed to provision volume with StorageClass "managed-premium":
azure.BearerAuthorizer#WithAuthorization:
Failed to refresh the Token for request to
https://management.azure.com/subscriptions/xxxx/resourceGroups/MC_XXXX/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-651cef89-49ae-11e9-8104-0a58ac1f222a?api-version=2016-04-30-preview:
StatusCode=401
Original Error: adal: Refresh request failed. Status Code = '401'.
{
"error": "invalid_client",
"error_description": "AADSTS7000215: Invalid client secret is provided.\r\n
Trace ID: xxxx\r\nCorrelation ID: xxxx\r\nTimestamp: 2019-03-18 18:49:42Z",
"error_codes": [
7000215
],
"timestamp": "2019-03-18 18:49:42Z",
"trace_id": "xxxx",
"correlation_id": "xxxx"
}
Yes, with dynamic provisioning the PersistentVolume and the underlying disk get provisioned on the fly when the claim is created.
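A quick way to verify this (the namespace is yours): once the claim leaves Pending, you should see a dynamically created PV bound to it.
# The claim should go from Pending to Bound, and a matching PV should appear
kubectl get pvc taskmanager-01 -n <namespace>
kubectl get pv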
Pretty sure this error means your service principal doesn't have permissions to the resource group, or its secret has expired.
One way to check that would be to find that information from the AKS resource (under servicePrincipalProfile >> clientId, using say az aks list -g %resource-group%) and check whether it has permissions to the resource group. If it does, you can try rotating the secret to a new one (see the sketch after the link below).
https://learn.microsoft.com/en-us/azure/aks/update-credentials
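A hedged sketch of those steps with the Azure CLI (resource group, cluster name, and client id are placeholders; exact flags and output fields vary by CLI version):
# Find the client id of the service principal the cluster uses
az aks show -g <resource-group> -n <cluster-name> --query servicePrincipalProfile.clientId -o tsv
# Reset the service principal secret, then hand the new secret to the cluster
az ad sp credential reset --id <client-id> --query password -o tsv
az aks update-credentials -g <resource-group> -n <cluster-name> \
    --reset-service-principal --service-principal <client-id> --client-secret <new-secret>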
I have a brand new Kubernetes v1.8 cluster with two nodes (RBAC enabled). Jenkins is deployed as a StatefulSet and recommended ServiceAccount/Role and RoleBindings were created as well (from here). Cluster info:
$ kubectl cluster-info
Kubernetes master is running at https://10.182.255.35:6443
When I try to set up a Kubernetes cloud in the Jenkins settings, I get a 403 (Forbidden) error. I followed the plugin guide, created 'Kubernetes Service Account' credentials in Jenkins, and am trying to configure a new cloud (see the Jenkins configuration screenshot). Here is the debug log from the plugin:
Nov 02, 2017 7:40:57 PM FINE org.csanchez.jenkins.plugins.kubernetes.KubernetesFactoryAdapter
Creating Kubernetes client: KubernetesFactoryAdapter [serviceAddress=https://10.182.255.35:6443, namespace=default, caCertData=null, credentials=org.csanchez.jenkins.plugins.kubernetes.ServiceAccountCredential@99ee54b6, skipTlsVerify=true, connectTimeout=0, readTimeout=0]
Nov 02, 2017 7:40:57 PM FINE org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud
Error connecting to https://10.182.255.35:6443
java.io.IOException: Unexpected response code for CONNECT: 403
at okhttp3.internal.connection.RealConnection.createTunnel(RealConnection.java:371)
...(skipped)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:605)
Caused: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [list] for kind: [Pod] with name: [null] in namespace: [default] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
...(skipped)
At the same time, if I try to make an API call using this serviceAccount from inside the pod, it works:
$ kubectl exec -ti jenkins-0 bash (get a shell in the pod)
bash-4.3$ KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)
bash-4.3$ curl -sSk -H "Authorization: Bearer $KUBE_TOKEN"
https://10.182.255.35:6443/api/v1/namespaces/default/pods
{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces/default/pods",
"resourceVersion": "90645"
},
"items": [
{
...(skipped)
Answering my own question: the problem was with my proxy settings. You need to specify the instance IP in the no_proxy environment variable during cluster setup.
I don't have enough points to vote up, but I just want to confirm that this was related to proxy settings, as mentioned by @Symydo. So either add the instance IP to the NO_PROXY env variable of the Pod (a sketch follows) or remove the proxy settings if they are not necessary.
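A minimal sketch of the Pod-level approach, using the API server address from the question (the StatefulSet name and address are assumptions; adjust for your setup):
# Add proxy-bypass env vars to the Jenkins pods; some clients only read the lowercase variant
kubectl set env statefulset/jenkins NO_PROXY=10.182.255.35,localhost,127.0.0.1 no_proxy=10.182.255.35,localhost,127.0.0.1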
I have an off-the-shelf Kubernetes cluster running on AWS, installed with the kube-up script. I would like to run some containers that are in a private Docker Hub repository. But I keep getting a "not found" error:
> kubectl get pod
NAME READY STATUS RESTARTS AGE
maestro-kubetest-d37hr 0/1 Error: image csats/maestro:latest not found 0 22m
I've created a secret containing a .dockercfg file. I've confirmed it works by running the script posted here:
> kubectl get secrets docker-hub-csatsinternal -o yaml | grep dockercfg: | cut -f 2 -d : | base64 -D > ~/.dockercfg
> docker pull csats/maestro
latest: Pulling from csats/maestro
I've confirmed I'm not using the new format of the .dockercfg file; mine looks like this:
> cat ~/.dockercfg
{"https://index.docker.io/v1/":{"auth":"REDACTED BASE64 STRING HERE","email":"eng#csats.com"}}
I've tried running the Base64 encode on Debian instead of OS X, no luck there. (It produces the same string, as might be expected.)
Here's the YAML for my Replication Controller:
---
kind: "ReplicationController"
apiVersion: "v1"
metadata:
  name: "maestro-kubetest"
spec:
  replicas: 1
  selector:
    app: "maestro"
    ecosystem: "kubetest"
    version: "1"
  template:
    metadata:
      labels:
        app: "maestro"
        ecosystem: "kubetest"
        version: "1"
    spec:
      imagePullSecrets:
        - name: "docker-hub-csatsinternal"
      containers:
        - name: "maestro"
          image: "csats/maestro"
          imagePullPolicy: "Always"
      restartPolicy: "Always"
      dnsPolicy: "ClusterFirst"
kubectl version:
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Any ideas?
Another possible reason why you might see "image not found" is if the namespace of your secret doesn't match the namespace of the container.
For example, if your Deployment yaml looks like
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mydeployment
  namespace: kube-system
Then you must make sure the Secret yaml uses a matching namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: kube-system
data:
  .dockerconfigjson: ****
type: kubernetes.io/dockerconfigjson
If you don't specify a namespace for your secret, it will end up in the default namespace and won't get used. There is no warning message. I just spent hours on this issue so I thought I'd share it here in the hope I can save somebody else the time.
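One way to avoid the mismatch (a sketch; registry, credentials, secret name, and namespace are placeholders) is to create the pull secret directly in the namespace the Deployment uses:
# Create the docker-registry secret in the same namespace as the Deployment
kubectl create secret docker-registry mysecret \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<user> \
    --docker-password=<password> \
    --docker-email=<email> \
    --namespace=kube-system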
Docker generates a config.json file in ~/.docker/
It looks like:
{
"auths": {
"index.docker.io/v1/": {
"auth": "ZmFrZXBhc3N3b3JkMTIK",
"email": "email#company.com"
}
}
}
What you actually want is:
{"https://index.docker.io/v1/": {"auth": "XXXXXXXXXXXXXX", "email": "email@company.com"}}
Note 3 things:
1) there is no auths wrapping
2) there is https:// in front of the URL
3) it's one line
Then you base64-encode that and use it as the data for the .dockercfg key:
apiVersion: v1
kind: Secret
metadata:
  name: registry
data:
  .dockercfg: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
type: kubernetes.io/dockercfg
Note again the .dockercfg line is one line (base64 tends to generate a multi-line string)
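For example, a sketch of producing a single-line value (flag support differs between GNU and BSD/macOS base64):
# GNU coreutils: -w 0 disables line wrapping
base64 -w 0 < ~/.dockercfg
# Portable alternative: strip the newlines yourself
base64 < ~/.dockercfg | tr -d '\n'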
Another reason you might see this error is using a kubectl version different from the cluster version (e.g. using kubectl 1.9.x against a 1.8.x cluster).
The format of the secret generated by the kubectl create secret docker-registry command has changed between versions.
A 1.8.x cluster expects a secret with the format:
{
"https://registry.gitlab.com":{
"username":"...",
"password":"...",
"email":"...",
"auth":"..."
}
}
But the secret generated by the 1.9.x kubectl has this format:
{
"auths":{
"https://registry.gitlab.com":{
"username":"...",
"password":"...",
"email":"...",
"auth":"..."
}
}
}
So, double-check the value of the .dockercfg data in your secret and verify that it matches the format expected by your Kubernetes cluster version.
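A quick way to inspect it (a sketch, reusing the secret name from the question):
# Decode the .dockercfg payload of the secret to see which format it uses
kubectl get secret docker-hub-csatsinternal -o jsonpath='{.data.\.dockercfg}' | base64 --decode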
I've been experiencing the same problem. What I did notice is that in the example (https://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod) .dockercfg has the following format:
{
"https://index.docker.io/v1/": {
"auth": "ZmFrZXBhc3N3b3JkMTIK",
"email": "jdoe#example.com"
}
}
While the one generated by Docker on my machine looks something like this:
{
"auths": {
"https://index.docker.io/v1/": {
"auth": "ZmFrZXBhc3N3b3JkMTIK",
"email": "email#company.com"
}
}
}
By checking the source code, I found that there is actually a test for this use case (https://github.com/kubernetes/kubernetes/blob/6def707f9c8c6ead44d82ac8293f0115f0e47262/pkg/kubelet/dockertools/docker_test.go#L280).
I can confirm that if you just take the contents of "auths" and encode that, as in the example above, it will work for you.
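A sketch of doing that with jq (jq itself is an assumption, not something the original answer uses):
# Strip the "auths" wrapper and produce a single-line base64 string for the secret
jq -c '.auths' ~/.docker/config.json | base64 | tr -d '\n'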
Probably the documentation should be updated. I will raise a ticket on GitHub.