I'm thinking about setting up a GitOps CI/CD pipeline in my cluster with Jenkins and ArgoCD.
For a start I want to have a repository for my CI environment with some values files for the Helm charts of my applications.
One thing that I cannot really figure out is how I can automatically edit the Helm values files without changing the whole structure of the files.
The Jenkins pipeline methods for reading and writing YAML files fully recreate the file and re-format it in the process.
yq does not (seem to) re-order the keys, but it removes empty lines and comments, for example.
The only other thing that comes to my mind is using sed, but that feels kind of wrong, and it might easily break: for example when I add a second key with the same name in another group, or when I add or remove keys.
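To illustrate the fragility, a naive sed call (just a sketch, using the values file from the example below) would rewrite every tag: key in the file, not only the image tag it was meant to touch:
# fragile: matches any line whose key is "tag:", wherever it appears in the file
sed -i 's/^\( *tag:\).*/\1 1.33.8/' ci/app1/values.yaml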
Here's an example, to make it a bit more clear:
I have two repositories, one for my application, one for my CI, NI, ... configuration.
My application repo is not that important. Just some application. My config repository looks somewhat like this:
.
...
├── ci
│   ├── app1
│   │   ├── Chart.yaml
│   │   ├── values.yaml
│   ├── app2
...
├── staging
│   ├── app1
│   │   ├── Chart.yaml
│   │   ├── values.yaml
│   ├── app2
...
The values.yaml file of app1 in the ci folder might look like this:
app1:
  image:
    tag: 1.33.7
  ingress:
    enabled: true
    hosts:
      - app1.my-internal-ci.company.com
...
The actual Helm charts are somewhere else; here is only the configuration for the applications on the environments.
ArgoCD watches this repo and takes care that the desired state (as specified by the state of the repo) and the actual state in the cluster match (see https://argoproj.github.io/cd/ for more on the tool).
As a last step, my CI pipeline for app1, which builds a Docker image from the code in my application repo, should update the values.yaml file in the ci folder of the configuration repo with the new version 1.33.8 of the application that was just built, and push it as a new commit to the main branch.
Besides that, there are other values configured, like the ingress for example, that I update manually in the repo if needed.
Since the files are updated both automatically by the CI build pipeline and manually by a developer / DevOps engineer, I would like to keep them easily readable by a human (order of the keys, newlines, comments, ...).
Is there any other way of achieving this?
Thanks in advance!
After some further digging, I think the best option I have right now is yq. In newer versions (> 3.0.0; https://github.com/mikefarah/yq/issues/19) it should no longer remove comments. And I think I can live with the newline issue.
So if you have the files as mentioned above, you could update your image tag with:
yq -i '.app1.image.tag = "1.33.8"' ci/app1/values.yaml
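For completeness, this is roughly how that could look as the final step of the CI pipeline described above; the tag value, remote and branch names are placeholders for whatever your build produces:
# hypothetical CI step: bump the image tag in the config repo and push it
NEW_TAG="1.33.8"   # e.g. the tag of the image that was just built
yq -i ".app1.image.tag = \"${NEW_TAG}\"" ci/app1/values.yaml
git add ci/app1/values.yaml
git commit -m "Update app1 image tag to ${NEW_TAG}"
git push origin main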
Worth mentioning: If you're using kustomize instead of Helm, there is a built-in for this: kustomize edit set image my-app=myregistry.io/my-platform/my-app:1.2.3.
Thanks for your ideas and suggestions!
You can straightforwardly use the helm install -f option here. There are two important details about it:
The helm install -f values are used in addition to the values.yaml in the chart. You do not need to repeat the chart's values in your local settings.
helm install -f takes any valid YAML file; but every valid JSON file is a valid YAML file, so if your CI tool can write out JSON, it can create a file that can be passed to helm install -f.
Indeed, the precise format of the YAML file doesn't really matter, so long as the right keys and sequences and mappings are there with the right structure.
In a Jenkins context, you can use the standard writeJSON function to produce a local values file:
// Only need to provide the specific values we need to set;
// the chart's values.yaml will be used for anything we don't
// directly provide here
helmValues = ['environment': 'production',
              'tag': GIT_COMMIT]
writeJSON file: 'values.deploy.yaml', json: helmValues
sh 'helm upgrade --install -n myproject myproject ./charts/myproject -f values.deploy.yaml'
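Outside of Jenkins, the same trick works with anything that can write JSON; a minimal sketch (chart path, namespace and values here are just placeholders):
# write the override values as JSON (which is valid YAML) and hand them to helm
printf '{"environment":"production","tag":"%s"}\n' "$GIT_COMMIT" > values.deploy.yaml
helm upgrade --install -n myproject myproject ./charts/myproject -f values.deploy.yaml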
Related
I have a directory structure that looks like this:
some-root/
└── my-stuff/
    ├── BUILD
    ├── foo/
    │   └── BUILD
    ├── bar/
    │   └── BUILD
    └── baz/
        └── BUILD
I'd like to have a target like //some-root/my-stuff:update which runs all of //some-root/my-stuff/foo:update, //some-root/my-stuff/bar:update, //some-root/my-stuff/baz:update.
I can do this by listing each target as a dependency. However, if I have many of these and I want to be able to add more it becomes a pain (it's easy to add a bunch of subdirectories and miss adding one to the parent BUILD file).
Is there a way to use wildcard labels or otherwise discover labels from file paths? I'm able to do bazel test //some-root/my-stuff/... to run all tests under a path, but I can't seem to use that pattern inside of a BUILD file, and what I'd want is more like bazel run //some-root/my-stuff/...:update, which doesn't work either.
You can get all labels with the name update from the command line:
bazel query "attr(name, '^update$', //...)"
and then take the output of the query and manually edit your dependencies.
But unfortunately you can not put this into a genquery rule (which would generate the list of targets to depend on), because
queries containing wildcard target specifications (e.g. //pkg:* or //pkg:all) are not allowed
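As a command-line workaround, you could feed the query result straight back into bazel run; a sketch (bazel run accepts only one target at a time, hence the xargs -n1):
# run every target named "update" under //some-root/my-stuff/...
bazel query "attr(name, '^update$', //some-root/my-stuff/...)" | xargs -n1 bazel run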
I have the following structure:
/Jenkinsfile/script2.groovy
/Jenkinsfile/pipeline2.yaml
script1.groovy
pipeline1.yaml
There's a reference in script1 to the pipeline using:
yamlFile "pipeline1.yml"
or
yamlFile "./Jenkinsfiles/pipeline2.yaml"
And it works fine. I'm trying to use the same pipeline file in script2 but can't make it work.
Here's the relevant part of the script:
pipeline {
    agent {
        kubernetes {
            cloud "xxxx"
            yamlFile "pipeline.yml"
        }
    }
Any idea?
Note: pipeline1 and pipeline2 are the same file, just shown in different locations.
Given the directory structure you mentioned:
.
├── Jenkinsfile
│   ├── pipeline2.yaml
│   └── script2.groovy
├── pipeline1.yaml
└── script1.groovy
The following files can be read from within their parent directory as follows:
For script1, run from ./:
groovy ./script1.groovy is able to read both ./pipeline1.yaml and ./Jenkinsfile/pipeline2.yaml
For script2, run from ./:
groovy ./Jenkinsfile/script2.groovy is able to read ./pipeline1.yaml, since it is in the same directory that ./Jenkinsfile/script2.groovy is being run from, i.e. ./
groovy ./Jenkinsfile/script2.groovy is also able to read ./Jenkinsfile/pipeline2.yaml, because the path is relative.
I think you could possibly simplify this by just having the files reside in one directory, and also by using the syntax readYaml(file: './nameOfFile.yaml') (see the readYaml documentation).
.
├── pipeline1.yaml
├── script1.groovy
├── pipeline2.yaml
└── script2.groovy
Our monolith repo looks something like this:
├── bin
│   └── ...
└── source
    ├── foo
    │   ├── BUILD
    │   └── ...
    ├── bar
    │   ├── BUILD
    │   └── ...
    └── ...
We have thousands of BUILD files producing thousands of targets into the bin directory.
Now that we have all these compiled binaries in the bin directory, it would be nice to have a method for deriving which BUILD file put each of them there.
I don't want to hand-write a tool that recursively parses all of the BUILD file's build rules and prints the answer. Is this something that can be achieved with bazel query?
Most of our build rules are pretty typical:
cc_binary(
    name = "foo",
    ...
)

py_binary(
    name = "bar.py",
    ...
)
The end goal would be to run something like:
where-source ./bin/foo
./source/foo/BUILD created ./bin/foo
If you trim off the bin/ (I assume you're using --symlink_prefix?), you can use bazel query:
# Where did bin/foo/bar/baz get defined?
$ bazel query foo/bar/baz
//foo/bar:baz
Or, to get detailed info about the location:
$ bazel query --output=location foo/bar/baz
/home/kchodorow/gitroot/proj/source/foo/bar/BUILD:3:1: go_binary rule //foo/bar:baz
Or, to get the definition of the target itself:
$ bazel query --output=build foo/bar/baz
go_binary(
    name = "baz",
    srcs = ["//foo/bar/baz.go"],
)
See What build rule contains file src/main/java/com/example/cache/LRUCache.java as a source? and the rest of that page for more examples of bazel query.
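If you want something like the where-source helper from the question, a rough wrapper along these lines might work (the bin/ prefix handling and the assumption that the stripped path is a valid query target are mine):
#!/bin/sh
# hypothetical where-source: map a binary under bin/ back to the BUILD file defining it
target="${1#./}"         # e.g. bin/foo/bar/baz
target="${target#bin/}"  # strip the bin/ prefix -> foo/bar/baz
bazel query --output=location "$target"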
Been thinking on this for ages, really interested in any suggestions.
A simple Unity game project looks like this, with a Git repo at Root.
Root
├─Assets
│ └─Game files
└─Project files
As I develop numerous plugins over time, the resulting structure will look like this.
Root
├─Assets
│ ├─Plugins
│ │ ├─Plugin_A
│ │ ├─Plugin_B
│ │ ├─Plugin_C
│ │ └─Plugin_D
│ └─Game files
├─iOS Plugin projects
│ └─Plugin_C project
├─Android Plugin projects
│ └─Plugin_D project
└─Project files
Now I really want the plugins to have their own versioning (so I can keep developing them "from" any game project), and also to preserve the relative project locations for each.
The point is: to have multiple (!) subfolders in the same (!) subrepository. Like having Assets/Plugins/Plugin_D and Android Plugin projects/Plugin_D project in a single (!) subrepository. And the same for Plugin_C, etc.
It would be great to have a repository for each plugin at the root (preserving their subfolder locations).
Root
└─Assets
  └─Plugins
    └─Plugin_A
But the point is to have overlapping folders in the same (!) plugin repository. Like having a repository that contains the native plugins (Assets/Plugins/Plugin_C) and (!) also their respective plugin projects (iOS Plugin projects/Plugin_C project). Like this for iOS:
Root
├─Assets
│ └─Plugins
│   └─Plugin_C
└─iOS Plugin projects
  └─Plugin_C project
And for Android:
Root
├─Assets
│ └─Plugins
│   └─Plugin_D
└─Android Plugin projects
  └─Plugin_D project
I tried to make them all submodules of the project root (ignoring everything but the plugin folders), but I could not create multiple submodules in a single Root folder.
As a bonus, it would be great to have all the plugins in a single repository, so I could bootstrap any project easily, then add / remove modules selectively.
What I have so far is to make each folder a submodule locally, but this way the project setup process is really tedious / error-prone, and I also cannot version the native plugin projects together with their respective managed counterparts.
Really interested in any suggestions.
Yes, you can manage plugins A to D in separate repositories. But if these four plugins are related, you are probably better off managing them in one repo with four branches.
Then you can use git subtree to add the plugins into the subfolders where you want to use them in your projects.
For example, if you want to add Plugin_C to Assets/Plugins, you can use:
git subtree add --prefix=Assets/Plugins/Plugin_C <URL for pluginC repo> master
If you want to add Plugin_D to Android/, you can use:
git subtree add --prefix=Android/Plugin_D <URL for pluginD repo> master
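Later on, pulling upstream changes of a plugin back into the game project works the same way; a sketch with placeholder URL and branch:
# update the vendored copy of Plugin_C with the latest commits from its own repo
git subtree pull --prefix=Assets/Plugins/Plugin_C <URL for pluginC repo> master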
I want to trigger a Jenkins build with Gerrit Trigger only when a change to a file in a specific sub-directory is made. For example...
├── subdir1
│   ├── trigger
│   ├── also-trigger
├── subdir2
│   ├── do-not-trigger
If any change to a file in subdir1 is made I want to trigger a build.
Check the Gerrit event in your Jenkins project configuration. In the Gerrit settings, add a File path with the path subdir1 (of course you should also set the project and branch patterns). Then any change in subdir1 will trigger Jenkins, and changes in subdir2 will be ignored by Jenkins.
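For example, with the plugin's ANT-style "Path" compare type, a pattern along these lines should cover every file under subdir1 (the exact syntax is an assumption, so check it against your plugin version):
subdir1/**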
There's an unexpected behaviour with the Gerrit Trigger plugin when you select "Ref Updated" as the trigger event: it does not respect the File Path option. But if you have a UI pull-request way of doing things, it means you'll be merging changes through the UI, so you can try using "Change Merged" in the "Trigger on" section.
Looking at the parameters of the job: when a change-merged event is triggered, there is information about the changeset that triggered the build. When you use ref-updated, on the other hand, there is no information about the changeset, so it is not possible to check for file inclusion. https://bugs.chromium.org/p/gerrit/issues/detail?id=2483