Helm unable to find local charts when updating dependencies (local/offline)

I am installing Prometheus on my vanilla k8s cluster using Helm 3. Prometheus comes with the kube-state-metrics chart as a dependency.
My machine is completely cut off from the internet, so all my development is local.
I have installed ChartMuseum, which does host my repos. But when I try to update the dependency, Helm is not able to find it, whether I use a local path in Chart.yaml or the ChartMuseum URL.
Save error occurred: directory charts/kube-state-metrics not found
Deleting newly downloaded charts, restoring pre-update state
Error: directory charts/kube-state-metrics not found
I have tried most of the suggested solutions; nothing has worked so far.
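For reference, the commands in play look like this (the ChartMuseum URL is a placeholder for my local instance):

helm repo add local http://chartmuseum.local:8080
helm repo update
helm dependency update ./prometheus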

I resolved this issue. The chart was in fact resolving its dependencies, but it still produced this error.
I had referenced the repo in my requirements.yaml as file://./path-to-chart, but on dependency update it still printed that error message and did not generate the requirements.lock file.
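For anyone hitting the same thing, a local-path dependency should look roughly like this (paths and version are illustrative; note that in Helm 3 the dependencies block normally lives in Chart.yaml, while requirements.yaml is the Helm 2 layout):

# Chart.yaml (or requirements.yaml on Helm 2)
dependencies:
  - name: kube-state-metrics
    version: "2.13.0"                          # whatever version the local chart declares
    repository: "file://../kube-state-metrics" # path relative to this chart

With that in place, helm dependency update resolves the chart from disk with no network access.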
The Prometheus pod was in CrashLoopBackOff, and I had assumed the dependency was the reason, but the logs showed it was actually due to permissions on the persistent volume.
Regardless, Helm could do more to support on-premises workflows. Not many software houses have open access to the internet.

Related

During the deployment process, how do you get your existing app data into an application created by a public Helm chart for a LAMP stack?

Take bitnami/wordpress or bitnami/drupal for example. There are millions of articles on how to run two lines of code (helm repo add / helm install my-release chart) and have a fully working new instance of an app in 30 seconds. But I cannot find ANY information about how to get my existing data into that deployment.
In my development workflow, I use two docker images. One is for the app files and the other is for the database. Locally, it's easy enough to get my data into these images. Using MariaDB's docker instructions, I mount a local directory containing my db.sql file into /docker-entrypoint-initdb.d. The same goes for my files - pull them down into a local directory that's then mounted into the container's /var/www folder. Voila! Instant running app with all existing data.
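Concretely, that local workflow is roughly (paths, names, and credentials are illustrative):

# database container: init scripts mounted per the MariaDB image docs
docker run -d --name app-db \
  -e MARIADB_ROOT_PASSWORD=secret \
  -v ./initdb:/docker-entrypoint-initdb.d \
  mariadb:10.6

# app container: existing web files mounted into the web root
docker run -d --name app-web \
  -v ./web:/var/www \
  my-app-image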
So how do I do this with a public Helm chart?
Scenario: I get local copies of my db.sql and web files. I make my changes. I want to use bitnami/drupal to install this into a cluster (so a colleague can see it, for UAT, etc.). How do I do that? If this is a values.yaml issue, how do I configure that file to point to the database file I want to initialize with? Or how do I use helm install with --set to do it?
If getting a new app up and running is as easy as
helm install my-release bitnami/drupal
then shouldn't it be just as easy to run something like
helm install --set mariadb.docker-entrypoint-initdb.d.file=db.sql --set volume.www.initial.data=/local/web/files new-feature-ticket bitnami/drupal
I know that's pseudo code, but that's exactly the type of answer I'm looking for. I want to be able to deploy this as quickly as I do a new app, but initialized with my existing data, and with the bare minimum config needed to do so, whether that's via values.yaml or --set.
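For what it's worth, the Bitnami MariaDB subchart does expose an initdbScripts value that is close to this; a sketch (key names can differ between chart versions, so verify against the chart's values.yaml):

# values-local.yaml (hypothetical file name)
mariadb:
  initdbScripts:
    db.sql: |
      -- your existing dump, inlined here
      CREATE DATABASE IF NOT EXISTS drupal;

helm install new-feature-ticket bitnami/drupal -f values-local.yaml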

gitlab on kubernetes/docker: pipeline failing: Error cleaning up configmap: resource name may not be empty

We run gitlab-ee-12.10.12.0 under Docker and use Kubernetes to manage the gitlab-runner.
All of a sudden a couple of days ago, all my pipelines, in all my projects, stopped working. NOTHING CHANGED except I pushed some code. Yet ALL projects (even those with no repo changes) are failing. I've looked at every certificate I can find anywhere in the system and they're all good so it wasn't a cert expiry. Disk space is at 45% so it's not that. Nobody logged into the server. Nobody touched any admin screens. One code push triggered the pipeline successfully, next one didn't. I've looked at everything. I've updated the docker images for gitlab and gitlab-runner. I've deleted every kubernetes pod I can find in the namespace and let them get relaunched (my go-to for solving k8s problems :-) ).
Every pipeline run in every project now says this:
Running with gitlab-runner 14.3.2 (e0218c92)
on Kubernetes Runner vXpkH225
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab
Using Kubernetes executor with image lxnsok01.wg.dir.telstra.com:9000/broadworks-build:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:00
ERROR: Error cleaning up configmap: resource name may not be empty
ERROR: Job failed (system failure): prepare environment: setting up build pod: error setting ownerReferences: configmaps "runner-vxpkh225-project-47-concurrent-0-scripts9ds4c" is forbidden: User "system:serviceaccount:gitlab:gitlab" cannot update resource "configmaps" in API group "" in the namespace "gitlab". Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
That URL talks about bash logout scripts containing bad things. But nothing changed. At least we didn't change anything.
I believe the second error, implying that the user doesn't have permissions, is misleading; it seems to just be saying that the user couldn't do it. The primary error is the earlier one about the configmap cleanup. Again, no serviceaccounts, roles, rolebindings, etc. have changed in any way.
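(For context: if it really were RBAC, the service account would need update on configmaps, i.e. a Role along these lines, with names taken from the log above:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner-configmaps
  namespace: gitlab
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "create", "update", "delete"]

But as I said, none of that has changed here.)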
So I'm trying to work out what may CAUSE that error. What does it MEAN? What resource name is empty? Where can I find out?
I've checked the output from docker container logs and it says exactly what's in the error above. No more, no less.
The only thing I can think of is that perhaps gitlab-runner 14.3.2 doesn't like my k8s or its config. Going back and checking, it seems this version has changed: previous working pipelines ran on 14.1.
So two questions then: 1) Any ideas how to fix the problem (e.g. update some config, clear some crud, whatever)? And 2) How do I get gitlab to use a runner other than :latest?
Turns out something DID change: kubernetes pulled gitlab/gitlab-runner:latest between runs. It seems gitlab-runner 14.3 has a problem with my kubernetes. I went back through my pipelines and the last successful one was using 14.1.
So, after a day of working through it, I edited the relevant k8s deployment to redefine the image tag used for gitlab-runner to :v14.1.0, which is the last one that worked for me.
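For anyone else, the equivalent one-liner is something like this (deployment and namespace names are assumptions; adjust to your setup):

kubectl -n gitlab set image deployment/gitlab-runner \
  gitlab-runner=gitlab/gitlab-runner:v14.1.0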
Maybe I'll wait a few weeks, try a later one (now that I know how to easily change that tag), and see if the issue gets fixed. And perhaps go raise an issue on gitlab-runner.

ColdFusion Docker is uninstalling modules on build

I'm having issues creating a usable Docker container for a ColdFusion 2021 app. I can create the container, but every time it is rebuilt I have to reinstall all of the modules (admin, search, etc.). This is an issue because the site the container will be hosted on will be rebuilding the container every day.
The container is being built with docker-compose. I have tried using the installModule and importModule environment variables, running the install command from the Dockerfile, building the container and creating a .car file to keep the settings, and disabling secure mode via the environment variables.
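For context, the docker-compose attempt looks roughly like this (the image name and exact variable spellings are best checked against Adobe's docs for your image version):

services:
  coldfusion:
    image: adobecoldfusion/coldfusion2021:latest   # assumed image name
    environment:
      acceptEULA: "YES"
      password: "ColdFusion123"                    # placeholder admin password
      installModules: "admin,search"               # modules to install at startup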
I have looked at the log, and all of the different methods used to install/import the modules are actually downloading and installing the modules. However, when the container first starts to spin up there's a section where the selected modules are installed (and the modules that are not installed are listed). That section is followed by the message that the ColdFusion services are available; then it starts services, security, etc., uninstalls (and removes) the modules, says that no modules are going to be installed because they are not present, and gives the "services available" message again.
Somehow, it seems that one of the services is uninstalling and removing the module files, and none of the environment variables (or even the setup script) affect that process. I thought it might be an issue with the secure setup, but even with that disabled the problem persists. My main question is: what could be causing the modules to be uninstalled?
I was also looking for clarification on a couple of items:
a) All of the documentation I could find said that the .car file would be automatically loaded if it was in the /data folder (in one spot it's referred to as the image's /data folder). That would be at the top level, alongside /opt and /app, right? I couldn't find an existing data folder anywhere.
b) Several of the logs and help functions mention a /docs folder, but I can't find it in the file directory. Would anyone happen to know where I can find it? It seems like that would be helpful for solving this.
Thank you in advance for any help you can give!
I don't know if the Adobe images provide a mechanism to automatically install modules every time the container rebuilds, but I recommend you look into the Ortus CommandBox-based images. They have an environment variable for the cfpm packages you want installed, and CFConfig, which is much more robust than .car files.
https://hub.docker.com/r/ortussolutions/commandbox/
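A sketch of what that looks like (engine and variable names as I recall them from the Ortus docs; verify them on the Docker Hub page above):

docker run -p 8080:8080 \
  -e BOX_SERVER_APP_CFENGINE=adobe@2021 \
  -e CFPM_INSTALL=admin,search \
  -v "$(pwd)/app":/app \
  ortussolutions/commandbox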
FYI, I work for Ortus Solutions.

Cloud Build docker image unable to write files locally - fail to open file... permission denied

Using Service Account credentials, I can successfully run Cloud Build to spin up gsutil, move files from GCS into the instance, then copy them back out. All is good.
One of the Cloud Build steps successfully loads a docker image from an outside source; it loads fine and reports its own help info successfully. But when run, it fails with the error message:
fail to open file "..intermediary_work_product_file." permission denied
For the app I'm running in this step, this error is typically produced when the file cannot be written to its default location. I've set dir: "/workspace" to confirm the default.
So how do I grant read/write permissions to the app running inside a Cloud Build step so it can write its own intermediary work product to the local folders? The Cloud Build itself runs fine using Service Account credentials. I have tried adding more permissions, including Storage, Cloud Run, Compute Engine, and App Engine admin roles, but I get the same error.
I assumed that the credentials used to create the instance are passed to the runtime. I have dug deep into the GCP Cloud Build documentation and examples, but found no answers.
There must be something fundamental I'm overlooking.
This problem was resolved by changing the Dockerfile USER, as suggested by @PRAJINPRAKASH in this helpful answer: https://stackoverflow.com/a/62218160/4882696
I tried to solve this by systematically testing GCP services and role permissions. All Service Account credentials tested were able to create container instances and run gcloud or gsutil fine. However, the custom apps created containers but failed on local writes, even to the default shared /workspace.
When using GCP Cloud Build, local read/write permissions do not "pass through" from the default service account to the runtime instance. The documentation is not clear on this.
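The gist of the USER change, as a minimal sketch (base image and install steps are illustrative):

# Dockerfile for the custom build-step image
FROM debian:bullseye-slim
# ... install the app here ...
# Cloud Build mounts /workspace into every step; running as root
# lets the app write its intermediary files there
USER root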
I encountered this problem while building my React app with Cloud Build; I wasn't able to install node-sass globally...
So I recursively chown'd the /usr directory to nobody:nogroup, and it worked. I have no idea whether there is a better solution, but the important thing is that it fixed my issue.
I had a similar problem; the snippet I was looking for in my cloudbuild manifest was:
- id: perms
  name: "gcr.io/cloud-builders/git"
  entrypoint: "chmod"
  args: ["-v", "-R", "a+rw", "."]
  dir: "path/to/some/dir"

Docker Desktop Kubernetes Unable to connect to the server: EOF

Earlier today I increased my Docker Desktop resources, but ever since it restarted, Kubernetes has not been able to complete its startup. Whenever I try to run a kubectl command, I get Unable to connect to the server: EOF in response.
I had thought it started because I hadn't deleted a Helm chart before adjusting the resource values in Settings, so that those resources were assigned to the pods instead of the Kubernetes API server. But I have not been able to fix the issue.
This is what I have tried thus far:
Restarting Docker again
Resetting Kubernetes
Resetting Docker to factory settings
Deleting the VM in Hyper-V and restarting Docker
Uninstalling and reinstalling Docker Desktop
Deleting the pki folder and restarting Docker
Setting the KUBECONFIG environment variable
Deleting .kube/config and restarting
Another clean reinstall of Docker Desktop
But Kubernetes does not complete its startup, so I still get Unable to connect to the server: EOF in response.
Is there anything I haven't tried yet?
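One basic check worth adding for other readers: confirm kubectl is pointed at Docker Desktop's context at all (docker-desktop is the default context name it creates):

kubectl config get-contexts
kubectl config use-context docker-desktop
kubectl cluster-info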
I'll share that what solved this for me was Docker Desktop's "Reset Kubernetes cluster" feature in Settings. I know that @shenyongo said that a "reset kubernetes" didn't work, and I suppose they meant this.
But for the sake of other readers who may find this, I had this same error message (with Docker Desktop on Windows 11, using wsl2), and the solution for me was indeed to do this:
open the Settings page (right-click Docker Desktop in the status tray)
then choose "Kubernetes" on the left
then choose "Reset Kubernetes cluster"
Yes, that warns that "all stacks and kubernetes resources will be deleted", but as nothing else had worked for me (and I wasn't worried about losing much), I tried it, and it did the trick. In moments, all my k8s functionality was back to working.
As background, k8s had been working fine for me for some time. It was just that one day I found I was getting this error. I searched and searched and found lots of folks asking about it but not getting answers, let alone this answer. To be clear, like the OP here I had tried restarting Docker Desktop, restarting the host machine, even downloading and installing an available DD update (I was only a bit behind), and none of those worked. I didn't proceed to ALL the steps shenyongo did, as I thought I'd try this first, and the reset worked.
Hope that may help others. I realize some may fear losing something, but this helps stress the power of declarative vs imperative k8s configuration. It SHOULD be easy to recreate most everything if necessary. I realize it may not be so for everyone.
