ArgoCD external Helm values issue from GitLab URL - devops

I am currently having trouble deploying my applications with Helm on ArgoCD.
I use an Application resource (and will move to an ApplicationSet next), copied below, in which I need to reference a values.yml from another repository in my GitLab.
I tried putting the link to the repo directly, but it does not work.
I haven't found any other solution for using values files from another GitLab repository.
Can you help me?
Thanks in advance!
My code:
My Application resource file:
`
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: react-docker-app
  namespace: argocd
spec:
  syncPolicy:
    automated:
      selfHeal: true
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  source:
    repoURL: https://gitlab.com/api/v4/projects/40489526/packages/helm/stable
    targetRevision: 0.8.0
    chart: react-chart
    helm:
      valueFiles:
        - https://gitlab.com/maxdev42-gitops-projects/reactdockerapp2/-/blob/master/deployment/valtues.yaml
`
My values.yml from another repository:
`
image:
  repository: registry.gitlab.com/maxdev42/react-docker-app
  tag: "appv8"
`
I'm trying to use values files from other GitLab repositories to deploy my application on ArgoCD with Helm.

The word you are looking for is OTS (off-the-shelf).
Here you have an example: https://github.com/argoproj/argocd-example-apps/tree/master/helm-dependency
In short, you have to define a new chart in your repo with a custom values.yaml that refers to the chart from https://gitlab.com/api/v4/projects/40489526/packages/helm/stable as a dependency.
values.yaml should be changed to:
react-chart:
  image:
    repository: registry.gitlab.com/maxdev42/react-docker-app
    tag: "appv7"
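For reference, the wrapper chart in your repo declares the published chart as a dependency in its Chart.yaml. A minimal sketch, assuming a Helm v3 chart layout (the wrapper chart's name and version are only illustrative; the dependency name, version, and repository come from the question):

# Chart.yaml of the wrapper ("umbrella") chart living in your own repo
apiVersion: v2
name: react-docker-app-umbrella   # illustrative wrapper chart name
version: 0.1.0                    # illustrative wrapper chart version
dependencies:
  - name: react-chart
    version: 0.8.0
    repository: https://gitlab.com/api/v4/projects/40489526/packages/helm/stable

ArgoCD then points at the wrapper chart's path and resolves the dependency when rendering; locally you could verify the same setup with helm dependency update.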

Related

Getting the namespace in an OpenShift BuildConfig template

I've been searching and trying to figure out how I can get the namespace name of the OpenShift project and append it to the build config, so I can pass it in as a parameter when pushing an image to our JFrog Artifactory. Below is a snippet of the BuildConfig I have:
apiVersion: v1
metadata:
  name: "${COMPONENT_NAME}"
env:
  - name: APP_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
And within the same config template I am trying to reference it or use it within a URL:
output:
  to:
    kind: DockerImage
    name: artifactory.example.com/${JFROG_REPO}/$(APP_POD_NAMESPACE)/${COMPONENT_NAME}:${COMPONENT_TAG}
I have changed the brackets back and forth between ( and {.
This is the error I'm receiving when deploying through my pipeline:
spec.output.to.name: Invalid value: "artifactory.example.com/jfrog_repo_name/$(APP_POD_NAMESPACE)/microservice_name:1.0.0": name is not a valid Docker pull specification: invalid reference format
Obviously this tells me that it's not able to obtain the environment variable. This template is also being used for OpenShift v4.8.

How can I set JNDI configuration in a Docker overrides.yaml file?

If I have a Java configuration bean, say:
package com.mycompany.app.configuration;

// whatever imports

public class MyConfiguration {
    private String someConfigurationValue = "defaultValue";
    // getters and setters etc
}
If I set that using Jetty for local testing, I can do so using a config.xml file in the following form:
<myConfiguration class="com.mycompany.app.configuration.MyConfiguration" context="SomeContextAttribute">
  <someConfigurationValue>http://localhost:8080</someConfigurationValue>
</myConfiguration>
However, in the deployed environment in which I need to test, I will need to use Docker to set these configuration values; we use JBoss.
Is there a way to directly set these JNDI values? I've been looking for examples for quite a while but cannot find any. This would be in the context of a YAML file which is used to configure a k8s cluster. Apologies for the pseudocode; I would post the real code, but it's all proprietary so I can't.
What I have so far for the overrides.yaml snippet is of the form:
env:
  'MyConfig.SomeContextAttribute':
    class_name: 'com.mycompany.app.configuration.MyConfiguration'
    someConfigurationValue: 'http://localhost:8080'
However this is a complete guess.
You can achieve it by using a ConfigMap.
A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
First, you need to create a ConfigMap from your file using the command below:
kubectl create configmap <map-name> <data-source>
Where <map-name> is the name you want to assign to the ConfigMap and <data-source> is the directory, file, or literal value to draw the data from. You can read more about it here.
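As an aside, since <data-source> can also be literal values, a ConfigMap can be created without a file at all; a minimal sketch (the map name and keys here are only illustrative):

kubectl create configmap app-config \
  --from-literal=enemies=aliens \
  --from-literal=lives=3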
Here is an example:
Download the sample file:
wget https://kubernetes.io/examples/configmap/game.properties
You can check what is inside this file using the cat command:
cat game.properties
You will see that there are some variables in this file:
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
Create the ConfigMap from this file:
kubectl create configmap game-config --from-file=game.properties
You should see output confirming that the ConfigMap has been created:
configmap/game-config created
You can display the details of the ConfigMap using the command below:
kubectl describe configmaps game-config
You will see output as below:
Name: game-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
game.properties:
----
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
You can also see how the YAML of this ConfigMap looks using:
kubectl get configmaps game-config -o yaml
The output will be similar to this:
apiVersion: v1
data:
  game.properties: |-
    enemies=aliens
    lives=3
    enemies.cheat=true
    enemies.cheat.level=noGoodRotten
    secret.code.passphrase=UUDDLRLRBABAS
    secret.code.allowed=true
    secret.code.lives=30
kind: ConfigMap
metadata:
  creationTimestamp: "2022-01-28T12:33:33Z"
  name: game-config
  namespace: default
  resourceVersion: "2692045"
  uid: 5eed4d9d-0d38-42af-bde2-5c7079a48518
The next goal is connecting the ConfigMap to a Pod. It can be added in the YAML file of the Pod configuration.
As you can see, under containers there is an envFrom section. The name is the name of the ConfigMap which I created in the previous step. You can read about envFrom here.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      envFrom:
        - configMapRef:
            name: game-config
Create a Pod from the YAML file using:
kubectl apply -f <name-of-your-file>.yaml
The final step is checking the environment variables in this Pod using the command below:
kubectl exec -it test-pod -- env
As you can see below, there are environment variables from the file which I downloaded in the first step:
game.properties=enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
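As mentioned at the start of this answer, Pods can also consume a ConfigMap as configuration files in a volume instead of environment variables; a minimal sketch reusing the same game-config ConfigMap (the mount path is only illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-volume
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config   # each key becomes a file, e.g. /etc/config/game.properties
  volumes:
    - name: config-volume
      configMap:
        name: game-config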
The way to do this is as follows:
If you are attempting to set a value that looks like this in terms of fully qualified name:
com.mycompany.app.configuration.MyConfiguration#someConfigurationValue
Then it will look like the following in a YAML file:
com_mycompany_app_configuration_MyConfiguration_someConfigurationValue: 'blahValue'
It really is that simple. It does need to be set as an environment variable in the yaml, but I'm not sure whether it needs to be under env: or if that's specific to us.
I don't think there's a way of setting something in YAML that in XML would be an attribute, however. I've tried figuring that part out, but I haven't been able to.
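For illustration, if the value is indeed set as a plain container environment variable, the snippet in a pod or deployment spec might look like the sketch below (the container name and image are only placeholders; whether env: is the right place depends on how your image reads its configuration, as noted above):

spec:
  containers:
    - name: my-app          # placeholder name
      image: my-app:latest  # placeholder image
      env:
        - name: com_mycompany_app_configuration_MyConfiguration_someConfigurationValue
          value: 'http://localhost:8080'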

What is the proper image URL to be entered in the YAML file for a Knative deployment

I am trying to complete the Knative tutorial for deploying this sample: https://knative.dev/docs/serving/samples/hello-world/helloworld-ruby/
I have a URL upon completion; however, the page is not reachable. I am getting 404 Not Found.
When I run kubectl get all, I get the following:
NAME CONFIG NAME K8S SERVICE NAME GENERATION READY REASON ACTUAL REPLICAS DESIRED REPLICAS
revision.serving.knative.dev/helloworld-go-00001 helloworld-go helloworld-go-00001 1 True 0 0
revision.serving.knative.dev/sample-app-00001 sample-app 1 False ContainerMissing
revision.serving.knative.dev/sample-app-00002 sample-app 2 False ContainerMissing
Which leaves me to believe that there is something wrong with the image url specified in my yaml file.
My yaml looks like this:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sample-app
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: docker.io/adet4ever/sample-app
          env:
            - name: TARGET
              value: "Ruby Sample v1"
I noticed that the only time a different app works is when I am able to wget the image URL, like in the example here: https://docs.openshift.com/container-platform/4.1/serverless/getting-started-knative-services.html
I cannot wget this URL and am not sure why: docker.io/adet4ever/sample-app
I created a Docker Hub account and pushed the image. I don't know if I am missing anything else.
Thanks for helping as I have spent 2 days trying to fix this problem.

OpenShift: any deployment resulted in "Application is not available"

First time deploying to OpenShift (actually Minishift on my Windows 10 Pro). Every sample application I deployed successfully still resulted in "Application is not available".
From the Web Console I see a weird message, "Build #1 is pending", although I saw it was successful from PowerShell.
I found someone fixing a similar issue by changing to 0.0.0.0, but I gave it a try and it isn't the solution in my case.
Here are the full logs and how I am deploying:
PS C:\to_learn\docker-compose-to-minishift\first-try> oc new-app https://github.com/openshift/nodejs-ex
warning: Cannot check if git requires authentication.
--> Found image 93de123 (16 months old) in image stream "openshift/nodejs" under tag "10" for "nodejs"
Node.js 10.12.0
---------------
Node.js available as docker container is a base platform for building and running various Node.js applications and frameworks. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
Tags: builder, nodejs, nodejs-10.12.0
* The source repository appears to match: nodejs
* A source build using source code from https://github.com/openshift/nodejs-ex will be created
* The resulting image will be pushed to image stream tag "nodejs-ex:latest"
* Use 'start-build' to trigger a new build
* WARNING: this source repository may require credentials.
Create a secret with your git credentials and use 'set build-secret' to assign it to the build config.
* This image will be deployed in deployment config "nodejs-ex"
* Port 8080/tcp will be load balanced by service "nodejs-ex"
* Other containers can access this service through the hostname "nodejs-ex"
--> Creating resources ...
imagestream.image.openshift.io "nodejs-ex" created
buildconfig.build.openshift.io "nodejs-ex" created
deploymentconfig.apps.openshift.io "nodejs-ex" created
service "nodejs-ex" created
--> Success
Build scheduled, use 'oc logs -f bc/nodejs-ex' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose svc/nodejs-ex'
Run 'oc status' to view your app.
PS C:\to_learn\docker-compose-to-minishift\first-try> oc get bc/nodejs-ex -o yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: 2020-02-20T20:10:38Z
  labels:
    app: nodejs-ex
  name: nodejs-ex
  namespace: samplepipeline
  resourceVersion: "1123211"
  selfLink: /apis/build.openshift.io/v1/namespaces/samplepipeline/buildconfigs/nodejs-ex
  uid: 1003675e-541d-11ea-9577-080027aefe4e
spec:
  failedBuildsHistoryLimit: 5
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: nodejs-ex:latest
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    git:
      uri: https://github.com/openshift/nodejs-ex
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:10
        namespace: openshift
    type: Source
  successfulBuildsHistoryLimit: 5
  triggers:
  - github:
      secret: c3FoC0RRfTy_76WEOTNg
    type: GitHub
  - generic:
      secret: vlKqJQ3ZBxfP4HWce_Oz
    type: Generic
  - type: ConfigChange
  - imageChange:
      lastTriggeredImageID: 172.30.1.1:5000/openshift/nodejs#sha256:3cc041334eef8d5853078a0190e46a2998a70ad98320db512968f1de0561705e
    type: ImageChange
status:
  lastVersion: 1

Pulling images from private registry in Kubernetes

I have built a 4 node kubernetes cluster running multi-container pods all running on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like be able to pull them automatically.
I have tried running docker login on each server and putting the .dockercfg file in /root and /core
I have also done the above with the .docker/config.json
I have added a secret to the kube master and added imagePullSecrets: - name: docker.io to the Pod configuration file.
When I create the pod I get the error message:
Error: image <user/image>:latest not found
If I log in and run docker pull it will pull the image. I have tried this using docker.io and quay.io.
To add to what @rob said, as of Docker 1.7, the use of .dockercfg has been deprecated and they now use a ~/.docker/config.json file. There is support for this type of secret in Kubernetes 1.1, but you must create it using different keys/type configuration in the YAML:
First, base64 encode your ~/.docker/config.json:
cat ~/.docker/config.json | base64 -w0
Note that the base64 encoding should appear on a single line so with -w0 we disable the wrapping.
Next, create a yaml file:
my-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
$ kubectl create -f my-secret.yaml && kubectl get secrets
NAME TYPE DATA
default-token-olob7 kubernetes.io/service-account-token 2
registrypullsecret kubernetes.io/dockerconfigjson 1
Then, in your pod's YAML you need to reference registrypullsecret (or do the same in a replication controller):
apiVersion: v1
kind: Pod
metadata:
  name: my-private-pod
spec:
  containers:
    - name: private
      image: yourusername/privateimage:version
  imagePullSecrets:
    - name: registrypullsecret
If you need to pull an image from a private Docker Hub repository, you can use the following.
Create your secret key
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
Then add the newly created key to your Kubernetes service account.
Retrieve the current service account
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Edit sa.yaml and add the imagePullSecrets entry after secrets:
imagePullSecrets:
- name: myregistrykey
Update the service account
kubectl replace serviceaccount default -f ./sa.yaml
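For reference, after the edit sa.yaml would look roughly like the sketch below (the token secret name is generated by the cluster and will differ; other metadata fields are omitted here):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
secrets:
  - name: default-token-xxxxx   # generated name, yours will differ
imagePullSecrets:
  - name: myregistrykey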
I can confirm that imagePullSecrets did not work for me in a Deployment, but you can:
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
kubectl edit serviceaccounts default
Add
imagePullSecrets:
- name: myregistrykey
to the end after secrets, then save and exit.
And it works. Tested with Kubernetes 1.6.7.
Kubernetes supports a special type of secret that you can create that will be used to fetch images for your pods. More details here.
For CentOS 7, the Docker config file is under /root/.dockercfg:
echo $(cat /root/.dockercfg) | base64 -w 0
Copy and paste the result into a secret YAML based on the old format:
apiVersion: v1
kind: Secret
metadata:
  name: docker-secret
type: kubernetes.io/dockercfg
data:
  .dockercfg: <YOUR_BASE64_JSON_HERE>
It worked for me; hope it helps you too.
The easiest way to create the secret with the same credentials as your Docker configuration is with:
kubectl create secret generic myregistry --from-file=.dockerconfigjson=$HOME/.docker/config.json
This already encodes the data in base64.
If you can download the images with Docker, then Kubernetes should be able to download them too. But you need to add this to your Kubernetes objects:
spec:
  template:
    spec:
      imagePullSecrets:
        - name: myregistry
      containers:
        # ...
Where myregistry is the name given in the previous command.
Go the easy way; do not forget to define --type and to add it to the proper namespace:
kubectl create secret generic YOURS-SECRET-NAME \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
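If the secret has to live in a particular namespace (for example the one your pods run in), the same command can be scoped with -n; a usage sketch with an illustrative namespace name:

kubectl create secret generic YOURS-SECRET-NAME \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson \
  -n your-namespace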
