How to version control overlapping folder structures?

Been thinking about this for ages, really interested in any suggestions.
A simple Unity game project looks like this, with a Git repo at Root.
Root
├─Assets
│ └─Game files
└─Project files
As I develop numerous plugins over time, the resulting structure will look like this.
Root
├─Assets
│ ├─Plugins
│ │ ├─Plugin_A
│ │ ├─Plugin_B
│ │ ├─Plugin_C
│ │ └─Plugin_D
│ └─Game files
├─iOS Plugin projects
│ └─Plugin_C project
├─Android Plugin projects
│ └─Plugin_D project
└─Project files
Now I really want each plugin to have its own versioning (so I can keep developing them "from" any game project), while also preserving the relative project locations for each.
The point is to have multiple (!) subfolders in the same (!) subrepository, e.g. having Assets/Plugins/Plugin_D and Android Plugin projects/Plugin_D project in a single (!) subrepository, and doing the same with Plugin_C, etc.
It would be great to have a repository for each plugin at the root (preserving its subfolder location).
Root
└─Assets
  └─Plugins
    └─Plugin_A
But the point is to have overlapping folders in the same (!) plugin repository: a repository containing the native plugin (Assets/Plugins/Plugin_C) and (!) its respective overlapping plugin project (iOS Plugin projects/Plugin_C project). Like this for iOS:
Root
├─Assets
│ └─Plugins
│   └─Plugin_C
└─iOS Plugin projects
  └─Plugin_C project
And for Android:
Root
├─Assets
│ └─Plugins
│   └─Plugin_D
└─Android Plugin projects
  └─Plugin_D project
I tried to make them all submodules of the project root (ignoring everything but the plugin folders), but I could not create multiple submodules in a single Root folder.
As a bonus, it would be great to have all the plugins in a single repository, so I could bootstrap any project easily, then add / remove modules selectively.
What I have so far is making each folder a separate submodule, but this way the project setup process is really tedious and error-prone, and I also cannot version the native plugin projects together with their respective managed counterparts.
Really interested in any suggestions.

Yes, you can manage plugins A to D in separate repositories. But if these four plugins are related, it is better to manage them in a single repo with four branches.
Then you can use git subtree to add each plugin into the subfolder where you want it in your project.
For example, if you want to add Plugin_C in Assets/Plugins, you can use:
git subtree add --prefix=Assets/Plugins/Plugin_C <URL for pluginC repo> master
If you want to add Plugin_D to Android/, you can use:
git subtree add --prefix=Android/Plugin_D <URL for pluginD repo> master
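Since you also want to keep developing the plugins "from" any game project, note that git subtree works in both directions. A minimal sketch, assuming the same URLs and branches as above:
# push local changes made under the plugin folder back to the plugin repo
git subtree push --prefix=Assets/Plugins/Plugin_C <URL for pluginC repo> master
# later, pull upstream plugin changes back into the game project
git subtree pull --prefix=Assets/Plugins/Plugin_C <URL for pluginC repo> master --squash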

Related

Change value in yaml file without changing the file's structure

I'm thinking about setting up a GitOps CI/CD pipeline in my cluster with Jenkins and ArgoCD.
For a start I want to have a repository for my CI environment with some values files for the Helm charts of my applications.
One thing that I cannot really figure out is, how I can automatically edit the Helm values files without changing the whole structure of the files.
The Jenkins pipeline methods for reading and writing yaml files will fully recreate it and in the process re-format the whole file.
yq does not (seem to) re-order the keys, but it removes empty lines and comments, for example.
The only other thing that comes to my mind is using sed. But that feels kind of wrong, and it might easily break, like when I add a second key with the same name in another group, or add or remove keys.
Here's an example, to make it a bit more clear:
I have two repositories, one for my application, one for my CI, NI, ... configuration.
My application repo is not that important. Just some application. My config repository looks somewhat like this:
.
...
├── ci
│   ├── app1
│   │   ├── Chart.yaml
│   │   └── values.yaml
│   ├── app2
...
├── staging
│   ├── app1
│   │   ├── Chart.yaml
│   │   └── values.yaml
│   ├── app2
...
The values.yaml file of app1 in the ci folder might look like this:
app1:
  image:
    tag: 1.33.7
  ingress:
    enabled: true
    hosts:
      - app1.my-internal-ci.company.com
...
The actual Helm charts are somewhere else; this repo only holds the configuration for the applications in each environment.
ArgoCD watches this repo and takes care that the desired state (as specified by the state of the repo) and the actual state in the cluster match (see https://argoproj.github.io/cd/ for more on the tool).
As a last step, my CI pipeline for app1, which builds a Docker image from the code in my application repo, should update the values.yaml file in the ci folder of the configuration repo with the new version 1.33.8 of the application that was just built, and push it as a new commit to the main branch.
Besides that, there are other values configured, like the ingress for example, that I update manually in the repo, if needed.
Since the files are updated both automatically by the CI build pipeline and manually by a developer / DevOps engineer, I would like to keep them easily readable for a human (order of the keys, newlines, comments, ...).
Is there any other way of achieving this?
Thanks in advance!
After some further digging I think the best option I have right now is yq. In newer versions (> 3.0.0; https://github.com/mikefarah/yq/issues/19) it should not remove comments anymore. And I can live with the newline thing, I think.
So if you have the files as mentioned above you could update your image tag with:
yq -i '.app1.image.tag = "1.33.8"' ci/app1/values.yaml
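In a pipeline, the whole update step could look roughly like this (a sketch, assuming the config repo is already checked out and a NEW_TAG variable holds the freshly built version):
# bump the image tag in place, then commit and push the change
yq -i ".app1.image.tag = \"$NEW_TAG\"" ci/app1/values.yaml
git add ci/app1/values.yaml
git commit -m "ci: bump app1 image tag to $NEW_TAG"
git push origin main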
Worth mentioning: If you're using kustomize instead of Helm, there is a built-in for this: kustomize edit set image my-app=myregistry.io/my-platform/my-app:1.2.3.
Thanks for your ideas and suggestions!
You can straightforwardly use the helm install -f option here. There are two important details about it:
The helm install -f values are used in addition to the values.yaml in the chart. You do not need to repeat the chart's values in your local settings.
helm install -f takes any valid YAML file; but every valid JSON file is a valid YAML file, so if your CI tool can write out JSON, it can create a file that can be passed to helm install -f.
Indeed, the precise format of the YAML file doesn't really matter, so long as the right keys and sequences and mappings are there with the right structure.
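To illustrate, the values file could even be produced with a one-liner; the chart path here is just a placeholder:
# JSON is valid YAML, so this works as a helm values file
echo '{"app1": {"image": {"tag": "1.33.8"}}}' > values.deploy.yaml
helm upgrade --install app1 ./charts/app1 -f values.deploy.yaml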
In a Jenkins context, you can use the standard writeJSON function to produce a local values file:
// Only need to provide the specific values we need to set;
// the chart's values.yaml will be used for anything we don't
// directly provide here
helmValues = ['environment': 'production',
              'tag': GIT_COMMIT]
writeJSON file: 'values.deploy.yaml', json: helmValues
sh 'helm upgrade --install -n myproject myproject ./charts/myproject -f values.deploy.yaml'

Use `bazel query` inside a build file

I am using Bazel with Golang, but the question is not Go-specific. I have a common Go directory structure:
cmd/
├── mycommand/
│   ├── BUILD.bazel
│   ├── main.go
│   └── somefolder/
│       └── other.go
├── othercommand/
│   ├── BUILD.bazel
│   └── main.go
pkg/
└── mypackage/
    ├── BUILD.bazel
    └── init.go
BUILD.bazel
WORKSPACE
... and I'd like to reference targets under the cmd folder. I have a bazel query that will give me the list of those targets:
bazel query 'kind("go_binary", deps(//cmd/...))'
//cmd/mycommand:mycommand
//cmd/othercommand:othercommand
The question: How can I include this query in a BUILD.bazel file, something like the following:
pkg_tar(
    name = "release",
    srcs = kind("go_binary", deps(//cmd/...)),
    mode = "0644",
)
...which gives
ERROR: /some/path/BUILD.bazel:10:12: name 'kind' is not defined
ERROR: /some/path/BUILD.bazel:10:30: name 'deps' is not defined
Build targets need to be statically referenced in BUILD files, so embedding queries as inputs to rule attributes does not work.
However, there are a couple of ways to dynamically generate targets to be used statically in the BUILD files:
1) Run a tool that generates a BUILD file before running Bazel. rules_go's Gazelle is a good example (see the invocation sketch after this list).
2) Write a repository rule that invokes non-hermetic tools to dynamically generate targets that your BUILD files can depend on.
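For option 1, a typical Gazelle invocation looks like this (assuming a gazelle target is declared in the root BUILD.bazel, as in the rules_go setup docs):
# regenerate BUILD.bazel files from the Go sources
bazel run //:gazelle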
Note that you may come across the genquery rule, which does let you perform a query on targets, but the rule outputs to a file during Bazel's execution phase, not a Starlark list that can be ingested into other rules' attributes during the analysis phase (which happens before the execution phase).
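For reference, a genquery usage could look roughly like this sketch (using the two cmd targets from the tree above; remember the output is a file, not a list of targets, and wildcard patterns are not allowed in the expression):
genquery(
    name = "go_binaries",
    expression = "kind(go_binary, deps(//cmd/mycommand:mycommand) + deps(//cmd/othercommand:othercommand))",
    scope = [
        "//cmd/mycommand:mycommand",
        "//cmd/othercommand:othercommand",
    ],
)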

Can I depend on all the Bazel targets matching a pattern, without listing them individually?

I have a directory structure that looks like this:
some-root/
└── my-stuff/
    ├── BUILD
    ├── foo/
    │   └── BUILD
    ├── bar/
    │   └── BUILD
    └── baz/
        └── BUILD
I'd like to have a target like //some-root/my-stuff:update which runs all of //some-root/my-stuff/foo:update, //some-root/my-stuff/bar:update, //some-root/my-stuff/baz:update.
I can do this by listing each target as a dependency. However, if I have many of these and I want to be able to add more it becomes a pain (it's easy to add a bunch of subdirectories and miss adding one to the parent BUILD file).
Is there a way to use wildcard labels or otherwise discover labels from file paths? I'm able to run bazel test //some-root/my-stuff/... to run all tests under a path, but I can't seem to use that pattern inside of a BUILD file, and what I'd want is more like bazel run //some-root/my-stuff/...:update, which doesn't work either.
You can get all labels with the name update from the command line:
bazel query "attr(name, '^update$', //...)"
and then take the output of the query and manually edit your dependencies.
But unfortunately you cannot put this into a genquery rule (which would generate the list of targets to depend on), because
queries containing wildcard target specifications (e.g. //pkg:* or //pkg:all) are not allowed
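One workaround is to script the synchronization outside of Bazel, e.g. with buildozer (a sketch, assuming buildozer is installed and the aggregate //some-root/my-stuff:update rule has a deps attribute):
# collect all :update targets, excluding the aggregate rule itself,
# and add them to the aggregate rule's deps
for t in $(bazel query "attr(name, '^update$', //some-root/my-stuff/...)" | grep -v '^//some-root/my-stuff:update$'); do
  buildozer "add deps $t" //some-root/my-stuff:update
done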

How to trigger a Jenkins build in Gerrit Trigger on change to project subdirectory

I want to trigger a Jenkins build with Gerrit Trigger only when a change to a file in a specific sub-directory is made. For example...
├── subdir1
│   ├── trigger
│   └── also-trigger
└── subdir2
    └── do-not-trigger
If any change to a file in subdir1 is made I want to trigger a build.
Check the Gerrit event in your Jenkins project configuration. In the Gerrit Trigger settings, add a File Path with the path subdir1 (of course you should also set the project and branch patterns). Then any change in subdir1 will trigger Jenkins, and changes in subdir2 will be ignored by Jenkins.
There's an unexpected behaviour with the Gerrit Trigger plugin when you select "Ref Updated" as the trigger event: it does not care about the File Path option. But if you have a UI pull-request way of doing things, it means you'll be merging changes through the UI, so you can try using "Change Merged" in the "Trigger on" section.
Looking at the parameters of the job: when a change-merged event triggers the build, there is information about the changeset that triggered it. In contrast, when you use ref-updated, there is no information about the changeset, so it is not possible to check for file inclusion. https://bugs.chromium.org/p/gerrit/issues/detail?id=2483

Gradle multi module dependencies with Continuous Integration

Just starting with Android Studio/Gradle/CI, I have an Android Studio project setup with a structure resembling this:
┌ Project
│
├── Module "lib-core" (produces .aar)
│
├── Module "lib-v1" (produces .aar, depends on "core-lib")
│
├── Module "lib-v2" (produces .aar, depends on "core-lib")
│
├── ... (potentially more libs)
│
└── Module "test-app" (produces .apk, depends on "lib-v1" and "lib-v2")
"lib-core" is used directly only from inside this project, while "lib-v1" and "lib-v2" can also be used from other projects ("test-app" is a sample project to show the usage of "lib-s") and need to be on our Maven repo as aar-s.
This project is also built with Jenkins and the artifacts go to a local Maven repo (Sonatype Nexus). This is achieved through "assembleRelease uploadArchives" tasks. As a part of the CI, the projects need to be versioned accordingly. Ideally, all lib modules (actually their artifacts) should be kept at the same version.
Now to address the issue: let's say I've bumped the version to 1.4.2. When Jenkins tries to evaluate the build scripts, it complains that "lib-core" with version 1.4.2 does not yet exist, which is true. This is the case where "lib-v1" declares the dependency on "lib-core" via 'compile "org.example:lib-core:1.4.2@aar"'.
If, on the other hand, "lib-v1" declares the dependency on "lib-core" via 'compile project(":lib-core")', the produced pom.xml on the Nexus repo (for "lib-v1") doesn't include the correct reference to "lib-core"... Unfortunately I currently don't have access to a sample, but if I remember correctly the groupId is something along the lines of "unresolved", "unspecified" or similar. So in that case "lib-v1" can't be used further down the pipeline (using { transitive = true } to resolve "lib-core").
Is there a way to set up the build script so that "lib-core" would be built and its artifact uploaded before the other modules are evaluated, but without splitting this into multiple projects? Or some other setup that would enable building this project both on the developer machine and on the CI server?
It somehow seems I'm over-complicating things and this could be accomplished in some other (simple) way, I'm just not seeing it currently.
EDIT
When I declare the dependency with 'compile project(":lib-core")' I get the following in pom.xml for "lib-v1" on the Nexus repo:
<dependency>
    <groupId>Library</groupId>
    <artifactId>lib-core</artifactId>
    <version>unspecified</version>
    <scope>compile</scope>
</dependency>
artifactId is correct, but groupId is the module name, while the version is "unspecified" - so the project using "lib-v1" can't resolve the "lib-core" dependency.
I do not think it's possible to have it "built and its artifact uploaded before evaluating the other modules", because Gradle has to complete the "Configuration" phase before the "Execution" phase. During the "Configuration" phase, Gradle will try to evaluate all dependencies in all modules.
However, "declares the dependency to "lib-core" via 'compile project(":lib-core")'" should works for you and we have similar set-up as yours which works ok. Maybe somethings wrong with module gradle/version in your build.gradle. If you can provide more details, e.g. pom.xml and build.gradle, it will be more clear.
Could you try the following in your build.gradle at the project level:
allprojects {
    group = 'org.example'
    version = '1.4.2'
}
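If that fixes it, the generated pom.xml entry for "lib-core" should come out along these lines (expected result, assuming the org.example group from the question):
<dependency>
    <groupId>org.example</groupId>
    <artifactId>lib-core</artifactId>
    <version>1.4.2</version>
    <scope>compile</scope>
</dependency>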
