Red Hat AMQ 6 for OpenShift - how to prevent extra split folder from being created

I am modifying our app, which is based on Red Hat AMQ 6 for OpenShift. I added a script before the original entrypoint of the image, "launch.sh", to do some manual work. It takes a while, maybe 1 or 2 minutes, much longer than a normal startup, and I suspect that is why a "split-2" folder now always appears when the app is running. I have never understood why the split folder appears in the first place.
I can do manual draining to migrate afterwards, but I want to prevent split-2 from appearing at all.
Do I need to increase some timeout value? If so, what is the config key? AMQ_SPLIT is always true from the beginning and I cannot change that. Does AMQ_LOCK_TIMEOUT have something to do with this behaviour? It is currently set to 60.
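For reference, this is how I set such values today, as environment variables on the DeploymentConfig (the dc name below is only a placeholder for ours):

oc set env dc/broker-amq AMQ_LOCK_TIMEOUT=60
oc rollout latest dc/broker-amq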

Related

MariaDB settings in Docker

A while back I created an instance of mariadb inside a docker container on a machine running Ubuntu. I've since learned that I'll need to update some settings to keep things running smoothly, but when I created the image, I did not specify any .cnf volumes. How do I update/create a .cnf file for this image? I'm a complete newb when it comes to docker, so please spoon-feed me.
I've tried accessing the file from within the image, but there are no text editors.
The defaults of MariaDB work pretty much out of the box (container) for small instances. You should only need to change settings when problems occur.
If you have spare memory you can increase your innodb_buffer_pool_size.
With the mariadb container, you don't need to edit the .cnf files; you can just add a few options on the command line per the docs (which you should definitely read).
I recommend using the defaults for a while, and if you encounter problems, posting a new question on dba.stackexchange.com that includes your show global status output and specifics on the queries that are slow (show create table TBLNAME / explain QUERY).
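For example, something along these lines should work with the official mariadb image (the container name, password, tag and value below are placeholders, not recommendations):

docker run --name some-mariadb -e MARIADB_ROOT_PASSWORD=change-me -d mariadb:10.6 --innodb-buffer-pool-size=1G

Everything after the image name is passed straight to the server as a startup option, so no .cnf edit is needed.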

Vaadin: How do I get rid of vite?

The description of "Vite" was tempting and I was stupid enough to enable this new feature. Since then I am stuck with endless UI recompile loops! I.e. each time after I logged into my application the frontend gets recompiled AGAIN and the application restarts. ||-(
Disabling Vite in the lower right control-dialog is not accepted, it remains activated. How do I get rid of this unbaked feature again?
This is using Vaadin 23.1.7 and Java 17.
In Vaadin 23.1 you can remove the feature flag by deleting it from src/main/resources/vaadin-featureflags.properties.
Note that Vaadin 23.2 uses Vite by default. If you want to continue using Webpack going forward, you need to instead add this feature flag to the properties file:
com.vaadin.experimental.webpackForFrontendBuild=true
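If I remember the key correctly, the Vite flag you delete in 23.1 looks like the line below; check your own vaadin-featureflags.properties for the exact name:

com.vaadin.experimental.viteForFrontendBuild=true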

Enable k8s experimental features in Docker Desktop

Does anyone know if this is possible?
All I can find in docs is reference to enabling docker experimental features, but not the kubernetes experimental features.
I tried this, but I still get an error.
k alpha debug -it exchange-pricing-865d579659-s8x6d --image=busybox --target=exchange-pricing-865d579659-s8x6d
error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").
Thanks
I had the same intent (as have others in this feature request). After several hours of trial and error, I finally found out a way to do so.
Steps:
Depending on which file you're trying to edit, you may need to fully shut down Docker Desktop and restart WSL. (Right-click the tray icon and press "Quit Docker Desktop", then run wsl --shutdown, then run wsl.)
Open the [...]/kubeadm/manifests folder in the Docker filesystem.
On Windows, navigate Windows Explorer to:
For Docker Desktop 4.2.0: \\wsl$\docker-desktop-data\version-pack-data\community\kubeadm\manifests
For Docker Desktop 4.11.0: \\wsl$\docker-desktop-data\data\kubeadm\manifests
Open the kube-controller-manager.yaml, kube-apiserver.yaml, and kube-scheduler.yaml files, adding the line below:
spec:
  containers:
  - command:
    [...]
    - --feature-gates=EphemeralContainers=true   # <-- add this line
Start Docker Desktop again.
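To check whether the gate actually took effect after the restart, something like this should do (kube-apiserver-docker-desktop is the default pod name on Docker Desktop; adjust if yours differs):

kubectl -n kube-system get pod kube-apiserver-docker-desktop -o yaml | grep feature-gates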
It looks so easy when it's already figured out, huh? Well, trust me, it was a pain to find out.
Some of the slowdowns I hit:
It took me quite a while to even find those manifest files. (I eventually found them using grepWin, searching through the whole \\wsl$\docker-desktop-data folder for any matches of a line I grabbed from the kube-apiserver-docker-desktop pod's config, which I viewed using Lens.)
Once I found it, I got confused by this documentation. When I read FEATURE STATE: Kubernetes v1.22 [alpha], I thought that meant you needed version 1.22 or higher of Kubernetes for the feature to be available. This caused a huge wild goose chase where I tried to change the version of Kubernetes that was being launched in Docker Desktop, which Docker Desktop didn't seem to like. (in retrospect, the issue may have just been the minor one in point 3 below...)
When I first made changes to the manifest files, I was using Notepad++. And despite my liking Notepad++, it's apparently not quite as smart as VS Code in the following regard: it does not automatically detect the indentation type for YAML files. Thus, when I pressed Tab to create an indent so I could add the new flag to the argument list, it inserted a tab character rather than spaces. This caused Kubernetes to fail to read the file. That might not be so bad if Kubernetes gave a sane error message for it, but instead it merely gave the message unexpected EOF. And I didn't even see that error message at first because it was not being propagated to the kube-controller-manager-docker-desktop pod (which was the only relevant one that wasn't immediately erroring/closing). Anyway, I didn't realize this was the problem at the time, so...
I decided to try bypassing the manifest-files and applying my modification to the etcd data-store directly. In retrospect, this was not a good idea, because the etcd data-store is pretty complex, the tooling is substandard, and the documentation is substandard. I spent a ton of time just trying to figure out how to send commands to read and write data to it (eventually managed to do so by calling etcdctl within the etcd-docker-desktop pod). I spent further time still writing up a NodeJS script capable of reading all the data as JSON, storing it in a dump file, and being able to write changes to entries back despite there being 3+ levels of quoting involved (I eventually was able to use stdin to pass the value rather than as part of the command string, to avoid quotation-mark-inception). After all the work on etcd reading/writing above, I found it didn't work anyway because Kubernetes invariably "breaks" if anyone else writes to its etcd data-store. (even if you write the exact same value that had been there before -- as verified by comparing the dumps before and after)
After all of the above, I decided to have one last go at just adding the flags to the mentioned manifest files. I was still getting the startup failure/error, but at the very end I decided I wanted to see exactly what about my changes was causing Kubernetes to reject them. So I tried commenting out my added line; the error remained. I thought maybe it was a checksum-based rejection then. But then I thought, maybe the YAML parser that Kubernetes is using is just outdated and finicky about what comments it is able to recognize. So I tried moving the comment around to different places, and was puzzled when the manifest was accepted just by moving the comment to the root level. I moved it back to various locations, with it working and not working, until I thought to try making the line "half-indented", since it was "in between" the working and non-working versions. That's when I noticed the line had a tab as its indent. And then it hit me: were the other lines also using tabs? I checked, and nope, they were using spaces. And that's when I realized I had wasted the last few hours on something I could have fixed with a simple indent change.
The moral of the story for some is that YAML is a bad configuration format, because it makes it easy to make trivial errors like this. But I actually place the blame more on whatever parser Kubernetes is using for the YAML files; it is unacceptable that a YAML parser would encounter an indentation mismatch and give a message so generic as unexpected EOF. I don't know what the identity of that YAML parser is, but I'm tired enough of the subject that I'm not even going to look into it right now. If one of you finds it, please make an issue report for it -- perhaps including this story as a real-world example of the pain that ambiguous error messages can cause.
Since Ephemeral Containers is still an alpha feature, it is disabled by default.
As you can read here, this requires the EphemeralContainers feature gate to be enabled, and Kubernetes client and server versions v1.16 or later.
As to the 2nd requirement, I assume both your Kubernetes server and client versions are v1.16 or later, but it looks like, for the time being, the 1st requirement cannot be met on Docker Desktop. According to this issue, it currently doesn't support enabling Feature Gates.
However, you may still try to SSH to your master node and edit the following files:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
by adding inside the command section:
--feature-gates=EphemeralContainers=true
Then you need to delete those pods so they are recreated with new settings applied. You'll find them by running:
kubectl get pods -n kube-system
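On Docker Desktop the static pod names simply follow the node name, so the deletion could look roughly like this (the names are assumptions; use whatever the command above actually lists):

kubectl -n kube-system delete pod kube-apiserver-docker-desktop kube-scheduler-docker-desktop

The kubelet then recreates them from the edited manifests.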

Gitlab Pages delivers random content

I am experiencing weird behavior with the Pages feature of the GitLab Omnibus package running on an Ubuntu 16.04 virtual machine. Some projects use Pages with Jekyll built by GitLab CI, which had been working as expected since it was first published with GitLab CE.
For a couple of days now, visiting any of the homepages of those sites shows the content of just one of the projects. Each of them should of course show different content, but they all show the same. Even stranger: the content shown on each of the sites changes over time to one of the other projects, and I cannot see whether this is deterministic.
Restarting the build processes of each of the projects did not fix this, neither did gitlab-ctl reconfigure, stop and start, nor rebooting the entire VM.
To investigate the issue, I edited what I assume is the resulting file of the build process at /var/opt/gitlab/gitlab-rails/shared/pages/www/www.domain.org/public/index.html. Not immediately, but later on, during the "rotation" of content described above, the edits showed up on the webpage.
So what is going on there? Is this some caching issue? Is it malconfiguration? Is it a bug? Please help me find and fix the problem, as those are production websites.
Looks like this is actually an issue

Jenkins quits at midnight

I am running Jenkins from cmd, not as a service, because I need to do GUI testing. It works fine when I start it up and I can do everything I want. But I scheduled a task for around 4 a.m., and when I came back, Jenkins hadn't lasted until 4 a.m. From the console, it seems to have simply quit at 12 a.m.
At first I thought it was a computer environment problem, but it still happens after I set my computer to never sleep and set the hard drive to never sleep as well. I locked my computer around 7 p.m., and it seems to have kept running until 12 a.m.
Any idea what is happening?
Look at the Jenkins logs. They're either in the directory where jenkins.war was or in a subdirectory; look for jenkins*out* and jenkins*err* files with the right timestamp. I can't check the exact location and names right now, sorry.
It seems you were running Jenkins from C:\ - congrats on cluttering your C drive root ;). To help clean it up, copy jenkins.war to C:\jenkins\ or similar and run it again to see everything it creates under there, so you know what to clean up.
Also, running it from the C drive root might have somehow interfered with a Windows maintenance task or something, which caused it to abort.
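A minimal sketch of what that could look like from cmd (the paths and port are assumptions; pick whatever suits your machine):

mkdir C:\jenkins
copy C:\jenkins.war C:\jenkins\
cd /d C:\jenkins
set JENKINS_HOME=C:\jenkins\home
java -jar jenkins.war --httpPort=8080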
