I've been searching for a way to get the namespace name of the OpenShift project so I can append it to the build config and pass it in as a parameter when publishing an image to our JFrog Artifactory. Below is a snippet of the BuildConfig I have:
apiVersion: v1
metadata:
  name: "${COMPONENT_NAME}"
env:
  - name: APP_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
And within the same config template I am trying to reference it, or use it within a URL:
output:
  to:
    kind: DockerImage
    name: artifactory.example.com/${JFROG_REPO}/$(APP_POD_NAMESPACE)/${COMPONENT_NAME}:${COMPONENT_TAG}
I have changed the brackets back and forth between ( and {.
This is the error I'm receiving when deploying through my pipeline:
spec.output.to.name: Invalid value: "artifactory.example.com/jfrog_repo_name/$(APP_POD_NAMESPACE)/microservice_name:1.0.0": name is not a valid Docker pull specification: invalid reference format
Obviously this tells me that it's not able to resolve the environment variable. This template is being used on OpenShift v4.8.
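One hedged workaround sketch (an assumption, not a confirmed fix): ${...} parameters in an OpenShift template are substituted when the template is processed, while $(...) environment-variable references are only resolved in certain build fields, so spec.output.to.name may never see APP_NAMESPACE. Declaring the namespace as an ordinary template parameter and supplying it at processing time sidesteps runtime substitution entirely:

parameters:
  - name: APP_NAMESPACE    # hypothetical template parameter
    required: true
output:
  to:
    kind: DockerImage
    name: artifactory.example.com/${JFROG_REPO}/${APP_NAMESPACE}/${COMPONENT_NAME}:${COMPONENT_TAG}

# pass the current project name when processing the template:
oc process -f template.yaml -p APP_NAMESPACE=$(oc project -q) | oc apply -f -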
I am currently having trouble deploying my applications with Helm on Argo CD.
I am using an Application resource (I will move to an ApplicationSet next), which I've copied below, and in it I need to reference a values.yml from another repository in my GitLab.
I tried putting the link to the repository in directly, but it does not work.
I haven't found any other solution for using values files from another GitLab repository.
Can you help me?
Thanks in advance!
My code:
My Application resource file:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: react-docker-app
  namespace: argocd
spec:
  syncPolicy:
    automated:
      selfHeal: true
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  source:
    repoURL: https://gitlab.com/api/v4/projects/40489526/packages/helm/stable
    targetRevision: 0.8.0
    chart: react-chart
    helm:
      valueFiles:
        - https://gitlab.com/maxdev42-gitops-projects/reactdockerapp2/-/blob/master/deployment/valtues.yaml
My values.yml from another repository:
image:
  repository: registry.gitlab.com/maxdev42/react-docker-app
  tag: "appv8"
I'm trying to use values files from other GitLab repositories to deploy my application on Argo CD with Helm.
The word you are looking for is OTS (off-the-shelf).
Here you have an example: https://github.com/argoproj/argocd-example-apps/tree/master/helm-dependency
In short, you have to define a new chart in your repo containing your custom values.yaml, with the chart from https://gitlab.com/api/v4/projects/40489526/packages/helm/stable declared as a dependency.
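A minimal Chart.yaml sketch for such a wrapper chart (the wrapper's name and version here are assumptions; the dependency name, version, and repository come from the Application above):

apiVersion: v2
name: react-docker-app-wrapper   # hypothetical name for the wrapper chart
version: 0.1.0
dependencies:
  - name: react-chart
    version: 0.8.0
    repository: https://gitlab.com/api/v4/projects/40489526/packages/helm/stable

The Argo CD Application's repoURL would then point at the Git repository holding this wrapper chart instead of the Helm registry.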
values.yaml should be changed to:
react-chart:
  image:
    repository: registry.gitlab.com/maxdev42/react-docker-app
    tag: "appv7"
If I have a Java configuration bean, say:
package com.mycompany.app.configuration;
// whatever imports

public class MyConfiguration {
    private String someConfigurationValue = "defaultValue";
    // getters and setters etc.
}
If I set that using Jetty for local testing, I can do so using a config.xml file of the following form:
<myConfiguration class="com.mycompany.app.configuration.MyConfiguration" context="SomeContextAttribute">
  <someConfigurationValue>http://localhost:8080</someConfigurationValue>
</myConfiguration>
However, in the deployed environment in which I need to test, I will need to use Docker to set these configuration values; we use JBoss.
Is there a way to directly set these JNDI values? I've been looking for examples for quite a while but cannot find any. This would be in the context of a YAML file used to configure a Kubernetes cluster. Apologies for the pseudocode; I would post the real code, but it's all proprietary so I can't.
What I have so far for the overrides.yaml snippet is of the form:
env:
  'MyConfig.SomeContextAttribute':
    class_name: 'com.mycompany.app.configuration.MyConfiguration'
    someConfigurationValue: 'http://localhost:8080'
However this is a complete guess.
You can achieve this by using a ConfigMap.
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
First, you need to create a ConfigMap from your file, using the command below:
kubectl create configmap <map-name> <data-source>
Where <map-name> is the name you want to assign to the ConfigMap and <data-source> is the directory, file, or literal value to draw the data from. You can read more about it here.
Here is an example:
Download the sample file:
wget https://kubernetes.io/examples/configmap/game.properties
You can check what is inside this file using cat command:
cat game.properties
You will see that there are some variables in this file:
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
Create the ConfigMap from this file:
kubectl create configmap game-config --from-file=game.properties
You should see output confirming that the ConfigMap has been created:
configmap/game-config created
You can display details of the ConfigMap using the command below:
kubectl describe configmaps game-config
You will see output as below:
Name:         game-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
game.properties:
----
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
You can also see how the yaml of this ConfigMap looks using:
kubectl get configmaps game-config -o yaml
The output will be similar to this:
apiVersion: v1
data:
  game.properties: |-
    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten
    secret.code.passphrase=UUDDLRLRBABAS
    secret.code.allowed=true
    secret.code.lives=30
kind: ConfigMap
metadata:
  creationTimestamp: "2022-01-28T12:33:33Z"
  name: game-config
  namespace: default
  resourceVersion: "2692045"
  uid: 5eed4d9d-0d38-42af-bde2-5c7079a48518
The next goal is connecting the ConfigMap to a Pod; it can be added in the yaml file of the Pod configuration.
As you can see, under containers there is an envFrom section. The name is the name of the ConfigMap created in the previous step. You can read about envFrom here.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      envFrom:
        - configMapRef:
            name: game-config
Create a Pod from the yaml file using:
kubectl apply -f <name-of-your-file>.yaml
The final step is checking the environment variables in this Pod using the command below:
kubectl exec -it test-pod -- env
As you can see below, there are environment variables from the file downloaded in the first step:
game.properties=enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
The way to do this is as follows:
If you are attempting to set a value that looks like this in terms of fully qualified name:
com.mycompany.app.configuration.MyConfiguration#someConfigurationValue
Then that will look like the following in a yaml file:
com_mycompany_app_configuration_MyConfiguration_someConfigurationValue: 'blahValue'
It really is that simple. It does need to be set as an environment variable in the yaml, but I'm not sure whether it needs to be under env: or if that's specific to us.
I don't think there's a way of setting something in YAML that in XML would be an attribute, however. I've tried figuring that part out, but I haven't been able to.
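For illustration, a hedged sketch of how that flattened name might appear in a container spec, assuming it does belong under env: (the container and image names here are hypothetical):

spec:
  containers:
    - name: my-app                       # hypothetical container name
      image: mycompany/my-app:latest     # hypothetical image
      env:
        # flattened form of com.mycompany.app.configuration.MyConfiguration#someConfigurationValue
        - name: com_mycompany_app_configuration_MyConfiguration_someConfigurationValue
          value: 'http://localhost:8080'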
I am trying to complete the Knative tutorial for deploying this sample: https://knative.dev/docs/serving/samples/hello-world/helloworld-ruby/
I get a URL upon completion; however, the page is not reachable. I am getting 404 Not Found.
When I run kubectl get all, I get the following:
NAME CONFIG NAME K8S SERVICE NAME GENERATION READY REASON ACTUAL REPLICAS DESIRED REPLICAS
revision.serving.knative.dev/helloworld-go-00001 helloworld-go helloworld-go-00001 1 True 0 0
revision.serving.knative.dev/sample-app-00001 sample-app 1 False ContainerMissing
revision.serving.knative.dev/sample-app-00002 sample-app 2 False ContainerMissing
This leads me to believe that there is something wrong with the image URL specified in my yaml file.
My yaml looks like this:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-app
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: docker.io/adet4ever/sample-app
          env:
            - name: TARGET
              value: "Ruby Sample v1"
I noticed that the only time a different app works is when I am able to wget the image URL, as in the example here: https://docs.openshift.com/container-platform/4.1/serverless/getting-started-knative-services.html
I cannot wget this URL and I'm not sure why: docker.io/adet4ever/sample-app
I created a Docker Hub account and pushed the image. I don't know if I am missing anything else.
Thanks for helping, as I have spent 2 days trying to fix this problem.
I have an off-the-shelf Kubernetes cluster running on AWS, installed with the kube-up script. I would like to run some containers that are in a private Docker Hub repository. But I keep getting a "not found" error:
> kubectl get pod
NAME READY STATUS RESTARTS AGE
maestro-kubetest-d37hr 0/1 Error: image csats/maestro:latest not found 0 22m
I've created a secret containing a .dockercfg file. I've confirmed it works by running the script posted here:
> kubectl get secrets docker-hub-csatsinternal -o yaml | grep dockercfg: | cut -f 2 -d : | base64 -D > ~/.dockercfg
> docker pull csats/maestro
latest: Pulling from csats/maestro
I've confirmed I'm not using the new format of the .dockercfg file; mine looks like this:
> cat ~/.dockercfg
{"https://index.docker.io/v1/":{"auth":"REDACTED BASE64 STRING HERE","email":"eng#csats.com"}}
I've tried running the Base64 encode on Debian instead of OS X, no luck there. (It produces the same string, as might be expected.)
Here's the YAML for my Replication Controller:
---
kind: "ReplicationController"
apiVersion: "v1"
metadata:
  name: "maestro-kubetest"
spec:
  replicas: 1
  selector:
    app: "maestro"
    ecosystem: "kubetest"
    version: "1"
  template:
    metadata:
      labels:
        app: "maestro"
        ecosystem: "kubetest"
        version: "1"
    spec:
      imagePullSecrets:
        - name: "docker-hub-csatsinternal"
      containers:
        - name: "maestro"
          image: "csats/maestro"
          imagePullPolicy: "Always"
      restartPolicy: "Always"
      dnsPolicy: "ClusterFirst"
kubectl version:
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Any ideas?
Another possible reason why you might see "image not found" is if the namespace of your secret doesn't match the namespace of the container.
For example, if your Deployment yaml looks like
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mydeployment
  namespace: kube-system
Then you must make sure the Secret yaml uses a matching namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: kube-system
data:
  .dockerconfigjson: ****
type: kubernetes.io/dockerconfigjson
If you don't specify a namespace for your secret, it will end up in the default namespace and won't get used. There is no warning message. I just spent hours on this issue, so I thought I'd share it here in the hope it saves somebody else the time.
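As a sketch, one way to make sure the secret lands in the right namespace is to pass --namespace explicitly when creating it (the secret name and namespace here mirror the example above; the credential flags are placeholders):

kubectl create secret docker-registry mysecret \
  --namespace=kube-system \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL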
Docker generates a config.json file in ~/.docker/
It looks like:
{
"auths": {
"index.docker.io/v1/": {
"auth": "ZmFrZXBhc3N3b3JkMTIK",
"email": "email#company.com"
}
}
}
what you actually want is:
{"https://index.docker.io/v1/": {"auth": "XXXXXXXXXXXXXX", "email": "email#company.com"}}
Note 3 things:
1) there is no auths wrapping
2) there is https:// in front of the URL
3) it's one line
Then you base64 encode that and use it as the data for the .dockercfg key:
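For example, a sketch of the encoding step (the -w0 flag disables GNU base64's line wrapping, so the output stays on one line):

base64 -w0 < ~/.dockercfg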
apiVersion: v1
kind: Secret
metadata:
  name: registry
data:
  .dockercfg: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
type: kubernetes.io/dockercfg
Note again that the .dockercfg value must be one line (base64 tends to generate a multi-line string).
Another reason you might see this error is due to using a kubectl version different than the cluster version (e.g. using kubectl 1.9.x against a 1.8.x cluster).
The format of the secret generated by the kubectl create secret docker-registry command has changed between versions.
A 1.8.x cluster expects a secret with this format:
{
  "https://registry.gitlab.com": {
    "username": "...",
    "password": "...",
    "email": "...",
    "auth": "..."
  }
}
But the secret generated by the 1.9.x kubectl has this format:
{
  "auths": {
    "https://registry.gitlab.com": {
      "username": "...",
      "password": "...",
      "email": "...",
      "auth": "..."
    }
  }
}
So, double-check the value of the .dockercfg data of your secret and verify that it matches the format expected by your Kubernetes cluster version.
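A sketch of one way to inspect the stored value (assuming the secret keeps its data under the .dockercfg key, as above; on macOS use base64 -D instead of -d):

kubectl get secret <your-secret-name> -o jsonpath='{.data.\.dockercfg}' | base64 -d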
I've been experiencing the same problem. What I did notice is that in the example (https://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod) .dockercfg has the following format:
{
  "https://index.docker.io/v1/": {
    "auth": "ZmFrZXBhc3N3b3JkMTIK",
    "email": "jdoe@example.com"
  }
}
While the one generated by docker in my machine looks something like this:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "ZmFrZXBhc3N3b3JkMTIK",
      "email": "email@company.com"
    }
  }
}
By checking the source code, I found that there is actually a test for this use case (https://github.com/kubernetes/kubernetes/blob/6def707f9c8c6ead44d82ac8293f0115f0e47262/pkg/kubelet/dockertools/docker_test.go#L280).
I can confirm that if you just take and encode the "auths" form, as in the example, it will work for you.
Probably the documentation should be updated. I will raise a ticket on GitHub.
I have built a 4-node Kubernetes cluster running multi-container pods, all on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like to be able to pull them automatically.
I have tried running docker login on each server and putting the .dockercfg file in /root and /core.
I have also done the above with the .docker/config.json.
I have added a secret to the kube master and added

imagePullSecrets:
  - name: docker.io

to the Pod configuration file.
When I create the pod, I get the error message:
Error: image <user/image>:latest not found
If I log in and run docker pull, it will pull the image. I have tried this using docker.io and quay.io.
To add to what @rob said: as of Docker 1.7, the use of .dockercfg has been deprecated, and Docker now uses a ~/.docker/config.json file. There is support for this type of secret in kube 1.1, but you must create it using different keys/type configuration in the yaml:
First, base64 encode your ~/.docker/config.json:
cat ~/.docker/config.json | base64 -w0
Note that the base64 encoding should appear on a single line, so with -w0 we disable the wrapping.
Next, create a yaml file:
my-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
$ kubectl create -f my-secret.yaml && kubectl get secrets
NAME                  TYPE                                  DATA
default-token-olob7   kubernetes.io/service-account-token   2
registrypullsecret    kubernetes.io/dockerconfigjson        1
Then, in your pod's yaml (or in the pod template of a replication controller) you need to reference registrypullsecret:
apiVersion: v1
kind: Pod
metadata:
  name: my-private-pod
spec:
  containers:
    - name: private
      image: yourusername/privateimage:version
  imagePullSecrets:
    - name: registrypullsecret
If you need to pull an image from a private Docker Hub repository, you can use the following.
Create your secret key
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
Then add the newly created key to your Kubernetes service account.
Retrieve the current service account
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Edit sa.yaml and add the imagePullSecrets entry after secrets:
imagePullSecrets:
- name: myregistrykey
Update the service account
kubectl replace serviceaccount default -f ./sa.yaml
I can confirm that imagePullSecrets did not work for me with a Deployment, but you can work around it:
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
kubectl edit serviceaccounts default
Add

imagePullSecrets:
  - name: myregistrykey

to the end, after secrets; save and exit.
And it works. Tested with Kubernetes 1.6.7.
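An equivalent non-interactive sketch using kubectl patch (same secret name as above):

kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'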
Kubernetes supports a special type of secret that you can create that will be used to fetch images for your pods. More details here.
For CentOS 7, the docker config file is under /root/.dockercfg:
echo $(cat /root/.dockercfg) | base64 -w 0
Copy and paste the result into a secret YAML based on the old format:
apiVersion: v1
kind: Secret
metadata:
  name: docker-secret
type: kubernetes.io/dockercfg
data:
  .dockercfg: <YOUR_BASE64_JSON_HERE>
It worked for me; hope it helps you too.
The easiest way to create the secret with the same credentials as your Docker configuration is:
kubectl create secret generic myregistry --from-file=.dockerconfigjson=$HOME/.docker/config.json
This already encodes the data in base64.
If you can download the images with Docker, then Kubernetes should be able to download them too. But you are required to add this to your Kubernetes objects:
spec:
  template:
    spec:
      imagePullSecrets:
        - name: myregistry
      containers:
        # ...
Where myregistry is the name given in the previous command.
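To double-check what was stored (and, in particular, the secret's type), you can run, for example:

kubectl get secret myregistry -o yaml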
Go the easy way, but do not forget to define --type and to add the secret to the proper namespace:
kubectl create secret generic YOURS-SECRET-NAME \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson