I have an off-the-shelf Kubernetes cluster running on AWS, installed with the kube-up script. I would like to run some containers that are in a private Docker Hub repository. But I keep getting a "not found" error:
> kubectl get pod
NAME READY STATUS RESTARTS AGE
maestro-kubetest-d37hr 0/1 Error: image csats/maestro:latest not found 0 22m
I've created a secret containing a .dockercfg file. I've confirmed it works by running the script posted here:
> kubectl get secrets docker-hub-csatsinternal -o yaml | grep dockercfg: | cut -f 2 -d : | base64 -D > ~/.dockercfg
> docker pull csats/maestro
latest: Pulling from csats/maestro
I've confirmed I'm not using the new .dockercfg format; mine looks like this:
> cat ~/.dockercfg
{"https://index.docker.io/v1/":{"auth":"REDACTED BASE64 STRING HERE","email":"eng#csats.com"}}
I've tried running the Base64 encode on Debian instead of OS X, no luck there. (It produces the same string, as might be expected.)
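(The encode step in both cases is just base64 over the file, roughly like this, assuming GNU coreutils on Debian and the stock BSD tool on OS X:)
# Debian / GNU coreutils; -w0 disables line wrapping
base64 -w0 ~/.dockercfg
# OS X / BSD base64 (no wrapping by default)
base64 ~/.dockercfg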
Here's the YAML for my Replication Controller:
---
kind: "ReplicationController"
apiVersion: "v1"
metadata:
  name: "maestro-kubetest"
spec:
  replicas: 1
  selector:
    app: "maestro"
    ecosystem: "kubetest"
    version: "1"
  template:
    metadata:
      labels:
        app: "maestro"
        ecosystem: "kubetest"
        version: "1"
    spec:
      imagePullSecrets:
        - name: "docker-hub-csatsinternal"
      containers:
        - name: "maestro"
          image: "csats/maestro"
          imagePullPolicy: "Always"
      restartPolicy: "Always"
      dnsPolicy: "ClusterFirst"
kubectl version:
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Any ideas?
Another possible reason why you might see "image not found" is if the namespace of your secret doesn't match the namespace of the container.
For example, if your Deployment yaml looks like
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mydeployment
  namespace: kube-system
Then you must make sure the Secret yaml uses a matching namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: kube-system
data:
  .dockerconfigjson: ****
type: kubernetes.io/dockerconfigjson
If you don't specify a namespace for your secret, it will end up in the default namespace and won't get used. There is no warning message. I just spent hours on this issue so I thought I'd share it here in the hope I can save somebody else the time.
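A quick way to avoid that mismatch is to create the secret directly in the target namespace. A minimal sketch, where the secret name and credentials are placeholders:
kubectl create secret docker-registry mysecret \
  --namespace=kube-system \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>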
Docker generates a config.json file in ~/.docker/
It looks like:
{
  "auths": {
    "index.docker.io/v1/": {
      "auth": "ZmFrZXBhc3N3b3JkMTIK",
      "email": "email@company.com"
    }
  }
}
What you actually want is:
{"https://index.docker.io/v1/": {"auth": "XXXXXXXXXXXXXX", "email": "email@company.com"}}
Note 3 things:
1) there is no "auths" wrapping
2) there is https:// in front of the URL
3) it's one line
Then you base64-encode that and use it as the data value for the .dockercfg key:
apiVersion: v1
kind: Secret
metadata:
  name: registry
data:
  .dockercfg: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
type: kubernetes.io/dockercfg
Note again that the .dockercfg value must be a single line (base64 tends to generate a multi-line string).
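If your base64 tool keeps wrapping the output, a sketch of the encode step (assuming GNU coreutils; on OS X the stock base64 does not wrap):
# collapse the JSON to a single line, then encode without wrapping
tr -d '\n' < ~/.dockercfg | base64 -w0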
Another reason you might see this error is due to using a kubectl version different than the cluster version (e.g. using kubectl 1.9.x against a 1.8.x cluster).
The format of the secret generated by the kubectl create secret docker-registry command has changed between versions.
A 1.8.x cluster expects a secret with the format:
{
  "https://registry.gitlab.com": {
    "username": "...",
    "password": "...",
    "email": "...",
    "auth": "..."
  }
}
But the secret generated by the 1.9.x kubectl has this format:
{
  "auths": {
    "https://registry.gitlab.com": {
      "username": "...",
      "password": "...",
      "email": "...",
      "auth": "..."
    }
  }
}
So, double check the value of the .dockercfg data of your secret and verify that it matches the format expected by your kubernetes cluster version.
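To see what your cluster actually has, you can decode the secret's data and eyeball its shape. A sketch, assuming a secret named regcred that carries a .dockercfg key:
kubectl get secret regcred -o jsonpath='{.data.\.dockercfg}' | base64 -d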
I've been experiencing the same problem. What I did notice is that in the example (https://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod) .dockercfg has the following format:
{
  "https://index.docker.io/v1/": {
    "auth": "ZmFrZXBhc3N3b3JkMTIK",
    "email": "jdoe@example.com"
  }
}
While the one generated by Docker on my machine looks something like this:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "ZmFrZXBhc3N3b3JkMTIK",
      "email": "email@company.com"
    }
  }
}
Looking at the source code, I found that there is actually a test for this use case (https://github.com/kubernetes/kubernetes/blob/6def707f9c8c6ead44d82ac8293f0115f0e47262/pkg/kubelet/dockertools/docker_test.go#L280).
I can confirm that if you just take the contents of "auths" and encode that, as in the example, it will work for you.
The documentation should probably be updated; I will raise a ticket on GitHub.
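A quick way to produce that un-wrapped, single-line form from Docker's config.json is to pull out just the value of "auths" and encode it. A sketch, assuming jq and GNU base64 are available:
jq -c '.auths' ~/.docker/config.json | base64 -w0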
Currently I am handling OIDC using OAuth2-proxy and Istio. We would now like to upgrade to Anthos since we are mainly on GCP. Everything works but I need to configure envoyExtAuthzHttp. Previously I would run kubectl edit configmap istio -n istio-system and add the following...
extensionProviders:
- name: oauth2-proxy
  envoyExtAuthzHttp:
    service: http-oauth-proxy.istio-system.svc.cluster.local
    port: 4180
    includeRequestHeadersInCheck: ['cookie']
    headersToUpstreamOnAllow: ['authorization']
    headersToDownstreamOnDeny: ['content-type', 'set-cookie']
However, ASM does not seem to install that config map...
Error from server (NotFound): configmaps "istio" not found
I noticed there is an istio-asm-managed config map, so I tried adding the config to that, but I am not sure how to restart ASM, since the command I am used to, kubectl rollout restart deployment/istiod -n istio-system, isn't working.
When I try to go to the site instead of being redirected I see...
RBAC: access denied
What worked for me, after studying what asmcli does when you follow the migration steps here, is setting this configmap in istio-system before enabling the Anthos Service Mesh:
apiVersion: v1
data:
  mesh: |
    extensionProviders:
    ...<your settings here>...
kind: ConfigMap
metadata:
  name: istio-asm-managed-rapid
  namespace: istio-system
I have not verified whether it was actually necessary to do this before enabling the ASM, but that is how I did it.
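For reference, applying and checking it is the usual kubectl flow. A sketch, assuming the manifest above is saved as mesh-config.yaml:
kubectl apply -f mesh-config.yaml
kubectl get configmap istio-asm-managed-rapid -n istio-system -o yaml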
If I have a Java configuration bean, say:
package com.mycompany.app.configuration;
// whatever imports
public class MyConfiguration {
    private String someConfigurationValue = "defaultValue";
    // getters and setters etc
}
If I set that using Jetty for local testing, I can do so with a config.xml file of the following form:
<myConfiguration class="com.mycompany.app.configuration.MyConfiguration" context="SomeContextAttribute">
  <someConfigurationValue>http://localhost:8080</someConfigurationValue>
</myConfiguration>
However, in the deployed environment in which I need to test, I will need to use Docker to set these configuration values; we use JBoss.
Is there a way to directly set these JNDI values? I've been looking for examples for quite a while but cannot find any. This would be in the context of a YAML file used to configure a k8s cluster. Apologies for the pseudocode; I would post the real code, but it's all proprietary so I can't.
What I have so far for the overrides.yaml snippet is of the form:
env:
  'MyConfig.SomeContextAttribute':
    class_name: 'com.mycompany.app.configuration.MyConfiguration'
    someConfigurationValue: 'http://localhost:8080'
However this is a complete guess.
You can achieve this by using a ConfigMap.
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
First, you need to create a ConfigMap from your file using a command like the one below:
kubectl create configmap <map-name> <data-source>
Where <map-name> is the name you want to assign to the ConfigMap and <data-source> is the directory, file, or literal value to draw the data from. You can read more about it here.
Here is an example:
Download the sample file:
wget https://kubernetes.io/examples/configmap/game.properties
You can check what is inside this file using cat command:
cat game.properties
You will see that there are some variables in this file:
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
Create the ConfigMap from this file:
kubectl create configmap game-config --from-file=game.properties
You should see output confirming that the ConfigMap has been created:
configmap/game-config created
You can display details of the ConfigMap using command below:
kubectl describe configmaps game-config
You will see output as below:
Name: game-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
game.properties:
----
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
You can also see how the YAML of this ConfigMap looks using:
kubectl get configmaps game-config -o yaml
The output will be similar to this:
apiVersion: v1
data:
  game.properties: |-
    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten
    secret.code.passphrase=UUDDLRLRBABAS
    secret.code.allowed=true
    secret.code.lives=30
kind: ConfigMap
metadata:
  creationTimestamp: "2022-01-28T12:33:33Z"
  name: game-config
  namespace: default
  resourceVersion: "2692045"
  uid: 5eed4d9d-0d38-42af-bde2-5c7079a48518
The next goal is connecting the ConfigMap to a Pod; it can be added in the YAML of the Pod configuration.
As you can see, under containers there is an envFrom section. The name is the name of the ConfigMap which I created in the previous step. You can read about envFrom here.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: nginx
    envFrom:
    - configMapRef:
        name: game-config
Create a Pod from the YAML file using:
kubectl apply -f <name-of-your-file>.yaml
The final step is checking the environment variables in this Pod using the command below:
kubectl exec -it test-pod -- env
As you can see below, there are environment variables from the file which I downloaded in the first step:
game.properties=enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
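If you only need a single value exposed under a specific variable name (closer to what the question asks for), you can also reference one key of the ConfigMap with configMapKeyRef. A minimal sketch, where the ConfigMap name app-config and its key are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-single
spec:
  containers:
  - name: test-container
    image: nginx
    env:
    - name: SOME_CONFIGURATION_VALUE
      valueFrom:
        configMapKeyRef:
          name: app-config              # placeholder ConfigMap name
          key: someConfigurationValue   # placeholder key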
The way to do this is as follows:
If you are attempting to set a value that looks like this in terms of fully qualified name:
com.mycompany.app.configuration.MyConfiguration#someConfigurationValue
Then that will look like the following in a yaml file:
com_mycompany_app_configuration_MyConfiguration_someConfigurationValue: 'blahValue'
It really is that simple. It does need to be set as an environment variable in the yaml, but I'm not sure whether it needs to be under env: or if that's specific to us.
I don't think there's a way of setting something in YAML that in XML would be an attribute, however. I've tried figuring that part out, but I haven't been able to.
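For illustration, if it does go under env:, the entry would sit in the container spec roughly like this (a sketch; the container name and image are placeholders):
spec:
  template:
    spec:
      containers:
      - name: my-app                    # placeholder
        image: mycompany/my-app:1.0     # placeholder
        env:
        - name: com_mycompany_app_configuration_MyConfiguration_someConfigurationValue
          value: 'blahValue'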
I am trying to complete the Knative tutorial for deploying a hello-world Ruby service: https://knative.dev/docs/serving/samples/hello-world/helloworld-ruby/
I get a URL upon completion; however, the page is not reachable. I am getting 404 Not Found.
When I run kubectl get all, I get the following:
NAME CONFIG NAME K8S SERVICE NAME GENERATION READY REASON ACTUAL REPLICAS DESIRED REPLICAS
revision.serving.knative.dev/helloworld-go-00001 helloworld-go helloworld-go-00001 1 True 0 0
revision.serving.knative.dev/sample-app-00001 sample-app 1 False ContainerMissing
revision.serving.knative.dev/sample-app-00002 sample-app 2 False ContainerMissing
Which leaves me to believe that there is something wrong with the image url specified in my yaml file.
My yaml looks like this:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-app
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: docker.io/adet4ever/sample-app
        env:
        - name: TARGET
          value: "Ruby Sample v1"
I noticed that the only time a different app works is when I am able to wget the image URL, like in the example here: https://docs.openshift.com/container-platform/4.1/serverless/getting-started-knative-services.html
I cannot wget this URL and am not sure why: docker.io/adet4ever/sample-app
I created a Docker Hub account and pushed the image. I don't know if I am missing anything else.
Thanks for helping as I have spent 2 days trying to fix this problem.
I have successfully installed my helm chart and was trying to change the value of my image version by doing:
helm upgrade --set myAppVersion=1.0.7 myApp . --atomic --reuse-values
The upgrade fails with this error:
Error: UPGRADE FAILED: an error occurred while rolling back the
release. original upgrade error: cannot patch "myappsecret" with
kind Secret: Operation cannot be fulfilled on secrets
"myappsecret": the object has been modified; please apply your
changes to the latest version and try again: cannot patch
"myappsecret" with kind Secret: Operation cannot be fulfilled on
secrets "diffgramsecret": the object has been modified; please apply
your changes to the latest version and try again
It's somehow related to a secret I have in my deployment:
This is the YAML file of the secret:
apiVersion: v1
data:
  .dockerconfigjson: {{ .Values.imagePullCredentials.gcrCredentials }}
kind: Secret
metadata:
  creationTimestamp: "2021-01-20T22:54:29Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:.dockerconfigjson: {}
      f:type: {}
    manager: kubectl
    operation: Update
    time: "2021-01-20T22:54:29Z"
  name: myappsecret
  namespace: default
  resourceVersion: "2073"
  uid: 7c99cb08-5576-4fa3-b6f9-d8d11d76d32c
type: kubernetes.io/dockerconfigjson
This secret is used on my deployments to fetch the docker image from our GCR docker registry.
I'm not sure why this is causing problems, because the only value I'm changing is the docker image tag.
Can anybody help me with this?
I have built a 4 node kubernetes cluster running multi-container pods all running on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like be able to pull them automatically.
I have tried running docker login on each server and putting the .dockercfg file in /root and /core
I have also done the above with the .docker/config.json
I have added a secret to the kube master and added imagePullSecrets: - name: docker.io to the Pod configuration file.
When I create the pod I get the error message:
Error: image <user/image>:latest not found
If I log in and run docker pull it will pull the image. I have tried this using docker.io and quay.io.
To add to what @rob said, as of docker 1.7, the use of .dockercfg has been deprecated and they now use a ~/.docker/config.json file. There is support for this type of secret in kube 1.1, but you must create it using different keys/type configuration in the yaml:
First, base64 encode your ~/.docker/config.json:
cat ~/.docker/config.json | base64 -w0
Note that the base64 encoding should appear on a single line so with -w0 we disable the wrapping.
Next, create a yaml file:
my-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
$ kubectl create -f my-secret.yaml && kubectl get secrets
NAME TYPE DATA
default-token-olob7 kubernetes.io/service-account-token 2
registrypullsecret kubernetes.io/dockerconfigjson 1
Then, in your pod's yaml (or the pod template of your replication controller) you need to reference registrypullsecret:
apiVersion: v1
kind: Pod
metadata:
  name: my-private-pod
spec:
  containers:
  - name: private
    image: yourusername/privateimage:version
  imagePullSecrets:
  - name: registrypullsecret
If you need to pull an image from a private Docker Hub repository, you can use the following.
Create your secret key
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
Then add the newly created key to your Kubernetes service account.
Retrieve the current service account
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Edit sa.yaml and add the imagePullSecrets entry after secrets:
imagePullSecrets:
- name: myregistrykey
Update the service account
kubectl replace serviceaccount default -f ./sa.yaml
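Alternatively, instead of exporting and replacing the service account, you can patch it in place. A sketch using kubectl patch:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'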
I can confirm that imagePullSecrets was not working with the deployment, but you can do the following:
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
kubectl edit serviceaccounts default
Add
imagePullSecrets:
- name: myregistrykey
to the end, after secrets, then save and exit.
And it works. Tested with Kubernetes 1.6.7.
Kubernetes supports a special type of secret that you can create that will be used to fetch images for your pods. More details here.
For CentOS 7, the Docker config file is under /root/.dockercfg:
echo $(cat /root/.dockercfg) | base64 -w 0
Copy and paste the result into a secret YAML based on the old format:
apiVersion: v1
kind: Secret
metadata:
  name: docker-secret
type: kubernetes.io/dockercfg
data:
  .dockercfg: <YOUR_BASE64_JSON_HERE>
It worked for me; hope it helps someone else too.
The easiest way to create the secret with the same credentials as your Docker configuration is:
kubectl create secret generic myregistry --from-file=.dockerconfigjson=$HOME/.docker/config.json
This already encodes data in base64.
If you can download the images with Docker, then Kubernetes should be able to download them too. But you need to add this to your Kubernetes objects:
spec:
  template:
    spec:
      imagePullSecrets:
      - name: myregistry
      containers:
      # ...
Where myregistry is the name given in the previous command.
Go the easy way; just don't forget to define --type and add the secret to the proper namespace:
kubectl create secret generic YOURS-SECRET-NAME \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson