I have been using Argo CD for a couple of weeks now and I don't really understand how to define spec.source.path in the Application.
I have set the following path:
...
spec:
  source:
    repoURL: https://github.com/theautomation/home-assistant.git
    targetRevision: main
    path: deploy/k8s
...
but Argo CD still syncs when a commit lands outside this path in the repo. Argo CD should ONLY watch this path for changes, right? Or does it not work that way?
Full application yaml:
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: home-assistant
  namespace: devops
  annotations:
    notifications.argoproj.io/subscribe.slack: cicd
    argocd.argoproj.io/manifest-generate-paths: /deploy
spec:
  project: default
  source:
    repoURL: https://github.com/theautomation/home-assistant.git
    targetRevision: main
    path: deploy/k8s
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
    namespace: home-automation
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
      - Validate=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
    automated:
      selfHeal: true
      prune: true
    retry:
      limit: 2
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
By default, Argo CD will sync when the commit changes regardless of what files were modified. This can be helpful when something in an App's directory references (via symlink or some other mechanism) something outside its directory.
If you are using webhooks to trigger the syncs, and you know an App isn't influenced by outside files, you can use an annotation to specify which directory (or directories) to watch for changes.
If you are not using webhooks and are relying on the usual reconciliation loop, the sync will still happen regardless of the annotation.
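For illustration, the annotation could be scoped to the Application's source path rather than the whole deploy/ folder. A minimal sketch, reusing the Application from the question (whether /deploy or /deploy/k8s is the right prefix depends on your repo layout):

```yaml
# Sketch: tell Argo CD's webhook handler which paths affect this Application.
# Paths prefixed with "/" are relative to the repo root; "." means relative
# to spec.source.path. Multiple paths can be separated with semicolons.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: home-assistant
  namespace: devops
  annotations:
    argocd.argoproj.io/manifest-generate-paths: /deploy/k8s
```

With this in place, a webhook event for a commit that only touches files outside /deploy/k8s will not trigger a sync for this Application.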
Argo CD will ONLY look for changes that have been applied to your targetRevision (i.e. main) and changes within your defined path. Assuming you have deploy/k8s with a bunch of manifests in that space, it will automatically look for changes to files within that path. Also, because you have syncPolicy.automated configured, it will automatically apply changes when they are found (normally about 3-5 minutes because of git polling, unless you have webhooks configured).
Related
I am configuring my Jenkins instance using jenkins-helm chart (https://github.com/jenkinsci/helm-charts/blob/main/charts/jenkins/VALUES_SUMMARY.md#jenkins-configuration-as-code-jcasc)
Currently Jenkins config is provided in values.yaml as:
jenkins:
  controller:
    JCasC:
      configScripts:
        key1: |-
          <a-very-big-yaml-value>
Is there a way to import this 'big-yaml-value' from a separate yaml file, as it would enhance the maintainability of our code?
As I don't use the helm-charts, I can't answer authoritatively, but it is supported in the abstract. According to the JCasC Getting Started documentation:
First, start a Jenkins instance with the Configuration as Code plugin installed.
Those running Jenkins as a Docker container (and maybe also pre-installing plugins), do include Configuration as Code plugin.
Second, the plugin looks for the CASC_JENKINS_CONFIG environment variable. The variable points to a comma-separated list of any of the following:
Path to a folder containing a set of config files. For example, /var/jenkins_home/init.CasC.
A full path to a single file. For example, /var/jenkins_home/init.CasC/jenkins.yaml.
A URL pointing to a file served on the web. For example, https://acme.org/jenkins.yaml.
If an element of CASC_JENKINS_CONFIG points to a folder, the plugin will recursively traverse the folder to find file(s) with a .yml, .yaml, .YAML, or .YML suffix. It will exclude hidden files or files that contain a hidden folder in any part of the full path. It follows symbolic links for both files and directories.
So, yes, you can have multiple yml files. I have over 20 (for 120 plugins). They are broken down by capability (e.g. global, agents, tools, credentials), including 2 for RBAC (1 for roles, 1 for users, etc.), plus some plugin-specific yml files. Some are also reused across instances while others are instance-specific.
You should be aware of Merge Strategy in the event of conflicts:
ErrorOnConflictMergeStrategy (default)
The strategy name is errorOnConflict.
Throws an exception if there's a conflict in multiple YAML files.
OverrideMergeStrategy
The strategy name is override
Override the config files according to the loading order.
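The merge strategy is selected via the casc.merge.strategy Java system property. With the Jenkins helm chart, one way to pass it might look like the sketch below (the javaOpts key name is an assumption; check the values supported by your chart version):

```yaml
# Sketch: switch JCasC to the "override" merge strategy so that later
# config files win on conflict, instead of the default errorOnConflict.
controller:
  javaOpts: "-Dcasc.merge.strategy=override"
```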
Also be aware that when updating an existing instance, certain plugin configurations may replace the existing configuration, while others may augment it, regardless of whether you use one yaml file or many. And of course, not 100% of options are JCasC-able yet, so some init.groovy is also required. YMMV.
You may also wish to review: JCasC Handling Secrets.
The setup below worked for me. Will put the relevant parts.
Directory layout for the helm chart:
jenkins/
├── conf/
│ ├── shared-library.yaml
│ └── big-yaml.yaml
├── templates/
│ └── jenkins-custom-casc-config.yaml
├── values.yaml
└── Chart.yaml
In the values.yaml, we override the CASC_JENKINS_CONFIG so it takes into account an additional path for config files on top of the default one.
controller:
  containerEnv:
    - name: CASC_JENKINS_CONFIG
      value: "/var/jenkins_home/casc_configs,/var/jenkins_home/custom-casc_configs"

persistence:
  volumes:
    - name: jenkins-custom-casc-config
      configMap:
        name: jenkins-custom-casc-config
  mounts:
    - mountPath: /var/jenkins_home/custom-casc_configs
      name: jenkins-custom-casc-config
The ConfigMap template jenkins-custom-casc-config.yaml loads all files present in the conf/ folder:
apiVersion: v1
kind: ConfigMap
metadata:
  name: jenkins-custom-casc-config
data:
{{- (.Files.Glob "conf/*").AsConfig | nindent 2 }}
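For reference, .Files.Glob combined with AsConfig renders each matched file as a key/value pair keyed by its base name, so with the layout above the rendered ConfigMap would look roughly like this (file contents abbreviated):

```yaml
# Sketch of the rendered output of the template above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: jenkins-custom-casc-config
data:
  shared-library.yaml: |-
    # contents of conf/shared-library.yaml
  big-yaml.yaml: |-
    # contents of conf/big-yaml.yaml
```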
I got it working by using a subfolder of /var/jenkins_home/casc_configs, into which I inject all config files. Otherwise, HiroCereal's answer works.
I did it following the idea of HiroCereal, but the UI keeps showing:
Configuration loaded from :
* /var/jenkins_home/casc_configs
and the folder casc_configs is empty.
Am I missing something? I'm using the helm charts.
I'm currently working on a SAPUI5 project managed by ui5.yaml and @ui5/cli. I don't know how to use something like .env to control variables whose values can be switched between a dev and a prod environment.
I have found a workaround: creating an env.js and including it in the controller file, so I can change values by editing env.js. Now I want to make it more convenient.
I use @ui5/cli's ui5 build command to build the project. There is a configuration option for custom tasks, and there is a task called webide-extension-task-copyFile which is used to copy /xs-app.json.
Now I have created three env files, named env-dev.js, env-pro.js, and env.js. I want to copy env-pro.js into env.js before the project is built, and copy env-dev.js into env.js when I run ui5 serve.
I have the config in ui5.yaml like this
specVersion: '2.4'
metadata:
  name: ui
type: application
framework:
  name: SAPUI5
....
customTasks:
....
- name: webide-extension-task-copyFile
  afterTask: webide-extension-task-resources
  configuration:
    srcFile: '/webapp/env/env-pro.js'
    destFile: '/webapp/env/env.js'
I'm sure the task has been triggered, but the file content hasn't changed. Could someone help me, or share another idea or a working example of using env files in a UI5 project?
Thank you :)
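One way to get a per-environment copy step, sticking with the copyFile task from the question, might be to keep a second ui5.yaml variant and select it with the CLI's --config flag. A sketch, where the file name ui5-dev.yaml is an assumption and the task/file names are taken from the question:

```yaml
# ui5-dev.yaml (assumed name): same project config as ui5.yaml,
# but the custom task copies the dev env file instead of the prod one.
specVersion: '2.4'
metadata:
  name: ui
type: application
builder:
  customTasks:
    - name: webide-extension-task-copyFile
      afterTask: webide-extension-task-resources
      configuration:
        srcFile: '/webapp/env/env-dev.js'
        destFile: '/webapp/env/env.js'
```

Then `ui5 serve --config ui5-dev.yaml` would use the dev variant, while a plain `ui5 build` keeps using the default ui5.yaml with env-pro.js.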
I'm trying to automate the deployment of a Ruby on Rails app to App Engine using Cloud Build.
My app.yaml looked like this,
runtime: ruby
env: flex
entrypoint: bundle exec rails server
But I'm getting this error,
Step #1: ERROR: (gcloud.app.deploy) There is a cloudbuild.yaml in the current directory, and the runtime field in /workspace/app.yaml is currently set to [runtime: ruby]. To use your cloudbuild.yaml to build a custom runtime, set the runtime field to [runtime: custom]. To continue using the [ruby] runtime, please remove the cloudbuild.yaml from this directory.
Then I tried to change the runtime to custom and add a Dockerfile, since a custom runtime needs a Dockerfile.
But now I'm getting an error saying,
ERROR: (gcloud.app.deploy) A custom runtime must have exactly one of [Dockerfile] and [cloudbuild.yaml] in the source directory; [/home/milindu/Projects/ElePath-Ruby] contains both
Then I removed the Dockerfile as well, but now I'm getting into this weird situation: you can see 'Step #1:' growing into several steps, as if stuck in a recursion.
cloudbuild.yaml should work with App Engine Flexible without the need to use a custom runtime. As detailed in the first error message you received, you cannot have an app.yaml and a cloudbuild.yaml in the same directory if you are deploying to a non-custom runtime. To remedy the situation, follow these steps:
1. Move your app.yaml and other Ruby files into a subdirectory (use your original app.yaml; no need to use a custom runtime).
2. Under your cloudbuild.yaml steps, modify the arguments for app deploy by adding a third one specifying your app.yaml's path.
Below is an example:
==================FROM:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy']
timeout: '1600s'
===================TO:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy', '[SUBDIRECTORY/app.yaml]']
timeout: '1600s'
I have a Google App Engine Flex project, which contains the following files:
app.yaml - to define the App Engine Flex environment
Dockerfile - based on a Google App Engine container with some additions
cloudbuild.yaml
cloudbuild.yaml content:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '--tag=gcr.io/$PROJECT_ID/<projectname>', '.']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy']
timeout: '1600s'
images: ['gcr.io/$PROJECT_ID/<projectname>']
This is based on the docs at:
https://cloud.google.com/cloud-build/docs/configuring-builds/build-test-deploy-artifacts#deploying_artifacts
I'm getting the following error on the app deploy command:
A custom runtime must have exactly one of [Dockerfile] and [cloudbuild.yaml] in the source directory
Without cloudbuild.yaml it doesn't know to try and deploy the app, without the Dockerfile it doesn't know what is in it, so how do I specify the same workflow with only one of these?
I ran into the same issue working on a Django project on App Engine Flex with a custom Dockerfile. I moved all project files except cloudbuild.yaml into a subfolder, and in the cloudbuild.yaml specified the subfolder like so:
steps:
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy", "<subfolder>/app.yaml"]
timeout: "1600s"
That worked for me.
(see also does appengine cloudbuild.yaml requires a custom runtime?)
My approach was the other way around: put the Cloud Build configuration in a subfolder and use the --config="<folder>" flag.
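Concretely, that layout might look like the sketch below (the folder name ci/ is an assumption), with the build submitted via `gcloud builds submit --config=ci/cloudbuild.yaml .`:

```yaml
# ci/cloudbuild.yaml (assumed path): app.yaml stays in the repo root,
# so the deploy step needs no extra path argument.
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy']
timeout: '1600s'
```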
I want to create projects on a particular node on OpenShift using the REST API.
I call the OpenShift API /oapi/v1/projectrequests to create projects.
And in the body I am sending,
body:
{
  apiVersion: 'v1',
  kind: 'ProjectRequest',
  metadata: {
    name: projectName,
    annotations: {
      "openshift.io/node-selector": "servicelevel=botifyproject"
    }
  }
}
This API call runs successfully and the project gets created, but when I run the command
oc describe project <project-name>
it doesn't show me anything that says it is inside the botifyproject node selector.
Is this the wrong way of doing this?